• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

The hardware to which the trained neural net is deployed can switch between battery and wall socket as power source. It can also disable the usage of the neural net in "battery" mode and enable it in "socket" mode. This reads like Nintendo plans to only allow DLSS to run in docked mode.

This wouldn't surprise me at all. I believe we will still have a 720p screen. Since we are likely looking at performance similar to the PS4 in portable mode, most games should be able to hit 720p native. There will be exceptions for sure, but I see the DLSS implementation being primarily there for docked mode. With Switch, portable play was set to target 720p and docked 1080p, ideally of course. With Switch Redacted, we are looking at a far bigger jump, with portable play still targeting 720p but docked play 4K. This factor could even cause a bigger disparity in clock speeds than on Switch. Because of the low rendering resolution for portable play, maybe we see portable clock speeds at only a third of docked, since this would give good battery life. Nintendo has shown some willingness to add additional performance profiles, so if it becomes obvious that developers need more performance in portable mode, they could always add a new profile.
 
Neural networks are just billions of linear regressions thrown into a blender because interaction and non-linear effects are fucking horrible to do in linear regression.

This approach makes neural networks mostly a joke for any advanced AI work as you're basically just doing the most brute force thing possible to try to estimate the non-linear and interaction effects of variables, but GPUs and TPUs are strong enough now to be able to do this brute force work well enough for it to be viable.
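To make that concrete, here's a toy sketch (entirely my own, in plain NumPy) of a pure interaction effect, y = x1*x2: a single linear regression can't fit it at all, while one hidden layer of "linear regressions" joined by a ReLU captures it easily.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = X[:, 0] * X[:, 1]                       # pure interaction effect

# A single linear regression can only predict ~0 everywhere for x1*x2.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("linear regression MSE:", np.mean((A @ w - y) ** 2))   # ~Var(y) ~ 0.11

# 64 "linear regressions" (hidden units) + ReLU, trained by plain gradient
# descent, capture the interaction the single regression cannot.
W1 = rng.normal(0.0, 1.0, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.maximum(X @ W1 + b1, 0.0)        # each column of W1 is a linear model
    err = (H @ W2 + b2).ravel() - y
    dH = (err[:, None] @ W2.T) * (H > 0)    # backprop through the ReLU
    W2 -= lr * (H.T @ err[:, None]) / len(X); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dH) / len(X);          b1 -= lr * dH.mean(axis=0)

H = np.maximum(X @ W1 + b1, 0.0)
print("tiny MLP MSE:", np.mean(((H @ W2 + b2).ravel() - y) ** 2))  # far lower
```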

The extreme inefficiency of neural networks, and the newness of their usage, means they're also getting a lot more efficient each year: you can almost always cull huge chunks of a neural network with almost no loss in accuracy.
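That culling is usually called pruning. A minimal magnitude-pruning sketch (the function name and numbers are my own illustration):

```python
import numpy as np

def magnitude_prune(W: np.ndarray, fraction: float = 0.8) -> np.ndarray:
    """Zero out the smallest `fraction` of weights by absolute value."""
    threshold = np.quantile(np.abs(W), fraction)
    return np.where(np.abs(W) < threshold, 0.0, W)

W = np.random.default_rng(0).normal(size=(256, 256))   # stand-in weight matrix
print((magnitude_prune(W) == 0).mean())                # ~0.8 of weights removed
```

On big over-parameterized networks, dropping a large fraction of near-zero weights like this (usually followed by a little retraining) often costs very little accuracy, which is where a lot of the year-over-year efficiency gains come from.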

It's very bad to think of neural networks as anything like AI; they're nothing more than "linear regression applied to problems where interaction effects and non-linear effects matter massively".

This brute force approach is so comically inefficient that OpenAI is nearly out of text to train their new versions of GPT on.

The hype about "oh Skynet!!!" relies on a bunch of dummies not realizing how inefficient neural networks are.

Neural networks are good for image reconstruction and frame generation and transcription and grammar checking and generating generic text and generating generic textures and generating generic animations and probably giving feedback and all of this is extremely good. It is not close to reaching AI capabilities in sci-fi and is too inefficient to get there.
 
At any time, you would need to wonder whether training and employing a neural network for a given effect is worthwhile. Some scenarios where it could perhaps be worth it are when you want to compute complex fluid dynamics (the real formula is notoriously difficult to evaluate efficiently, so a functional approximation could work wonders) or when you want to simulate realistic movement of light objects through the air (e.g. as part of a game mechanic). Neural networks can help when the mathematical underpinnings are either unknown (text generation) or too expensive to evaluate with high accuracy (fluid dynamics).
this is the biggest issue I've been trying to deal with, and one I figure has been such a solved problem for games that there aren't too many worthwhile examples. especially now, there are a lot of physics sims that are fast and "good enough" through compute, so AI hardware might not gain too much performance. but I haven't seen many examples of larger-scale game sims done with inference in a gaming environment
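For what it's worth, the usual pattern for the quoted idea is a surrogate model: run the expensive sim offline, then train a small net to map inputs to outcomes so the game only queries the net at runtime. A hedged sketch (the drag model, constants, and the `landing_distance` helper are all hypothetical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def landing_distance(speed, angle, drag=0.1, dt=1e-3):
    """Slow 'ground truth': integrate a trajectory with quadratic air drag."""
    vx, vy = speed * np.cos(angle), speed * np.sin(angle)
    x = y = 0.0
    while True:
        v = np.hypot(vx, vy)
        vx -= drag * v * vx * dt
        vy -= (9.81 + drag * v * vy) * dt
        x += vx * dt
        y += vy * dt
        if y < 0:
            return x

# Offline: sample the expensive sim, then fit a small surrogate net.
rng = np.random.default_rng(1)
X = np.c_[rng.uniform(5, 50, 2000), rng.uniform(0.1, 1.4, 2000)]  # (speed, angle)
y = [landing_distance(s, a) for s, a in X]
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# At runtime the game calls the cheap approximation instead of the integrator.
print(surrogate.predict([[30.0, 0.8]]))
```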
 
* Hidden text: cannot be quoted. *
Super grain of salt etc, but let's assume for a moment this is true,
Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.


Edit: Before anyone gets too excited by this even though they said grain of salt, note this internal contradiction:
"Also I have not been in contact with them for a long time"
vs
"What is funny is I had this information just 2 days before people were discussing on ... yesterday"
 
The biggest usage for AI in game design is almost never real time, but is for generating textures, animations, text etc to use as assets in the game.

Real time generating of text locally would be such a catastrophic waste of resources that it borders on comical.

You would just do that in development or on the cloud (hence not needing any tensor cores at all).

Local ChatGPT would probably take 20 GBs of RAM after significant optimization.

Don't expect any real time local AI usage other than image reconstruction for the Switch 2.
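That 20 GB figure is plausible from first principles: the weights alone cost (parameter count) x (bytes per parameter), before activations or any per-conversation state. A back-of-envelope check (the model sizes are public figures; the helper is my own):

```python
def model_ram_gb(params_billions: float, bytes_per_param: float) -> float:
    """RAM for the weights alone, ignoring activations and runtime buffers."""
    return params_billions * 1e9 * bytes_per_param / 2**30

print(model_ram_gb(175, 2))   # GPT-3-sized model in FP16: ~326 GB
print(model_ram_gb(20, 1))    # a quantized ~20B-parameter model in INT8: ~18.6 GB
print(model_ram_gb(1.5, 2))   # GPT-2 XL (1.5B params) in FP16: ~2.8 GB
```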
 
* Hidden text: cannot be quoted. *
I guess this could have some OK, even nice implementations, but it sounds to me about as attractive to the masses (from a commercial point of view), and even to developers, as the Wii U's asymmetrical gameplay...
 
Real time generating of text locally would be such a catastrophic waste of resources that it borders on comical. [...] Local ChatGPT would probably take 20 GBs of RAM after significant optimization.
You don't need ChatGPT's level of complexity to make real-time text generation useful in a video game.
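Right, something as simple as a word-level Markov chain over a game's existing dialogue runs in kilobytes of RAM, no tensor hardware or GPT-scale memory involved. A toy sketch (the corpus and names are mine):

```python
import random
from collections import defaultdict

# Word-level Markov chain: for each word, remember which words followed it.
def build_chain(corpus: str):
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

lines = "the guard eyes you warily . the guard waves you through ."
print(generate(build_chain(lines), "the"))
```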
 
I would think that, by now, Nvidia would have made some examples of tensor cores being usable in games if they found some theoretically viable use-cases. they're not one to pass on a chance to jack themselves off
 
Edit: Before anyone gets too excited by this even though they said grain of salt, note this internal contradiction:
"Also I have not been in contact with them for a long time"
vs
"What is funny is I had this information just 2 days before people were discussing on ... yesterday"
I read (between what I assumed was awkward phrasing) that first sentence as "I have only been in touch with them a short while" - not "I haven't heard from them in a long time".
 
How does DLSS interact with HDR, if at all?
You can run DLSS either in HDR before tonemapping (in FP16, I believe) or low dynamic range after tonemapping (in INT8).

For the HDR version, it’s recommended that you also include the exposure as an input for the model, which is the same value that would be used by the tonemapper. There is an autoexposure feature too, but it’s not recommended.

In the programming guide, look at sections 3.1.1, 3.9, 3.10 for all the details.
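To illustrate why the two modes use different formats (a toy Reinhard tonemapper of my own, not DLSS SDK code): pre-tonemap radiance is unbounded, hence FP16 plus an explicit exposure input; post-tonemap values live in [0, 1) and quantize comfortably to INT8.

```python
import numpy as np

def reinhard(hdr: np.ndarray, exposure: float) -> np.ndarray:
    x = hdr * exposure
    return x / (1.0 + x)            # maps [0, inf) into [0, 1)

hdr = np.random.default_rng(3).gamma(2.0, 1.5, size=(4, 4)).astype(np.float16)
exposure = 0.18 / float(hdr.astype(np.float32).mean())  # same value the tonemapper uses

# Option A ("HDR mode"): the upscaler consumes `hdr` (FP16) plus `exposure`,
# and tonemapping happens after the upscale.
# Option B ("LDR mode"): tonemap first, then the upscaler consumes the INT8 result.
ldr = np.round(reinhard(hdr.astype(np.float32), exposure) * 255.0).astype(np.uint8)
print(hdr.dtype, ldr.dtype)
```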
 
Another very big issue with generating text in video games is that GPT is trained mostly on technical, non-fictional text and therefore is good for writing copy and VERY bad for writing fiction.

You're going to have to do a massive amount of custom fine-tuning on GPT to get it to produce the type of fictional content you want.

This is going to be best used for some RPG that gives every NPC hundreds of lines of dialogue, but you'll need your writing team to have already produced a huge amount of text for each NPC.

And obviously this is not something that fits with any Nintendo game other than Fire Emblem and Xenoblade.
 
I assumed the claim was around it being less effective for Switch’s power consumption or similar. Every video that shows the boost a good DLSS implementation gives games on PC looks like magic to me.
This is also the case. Here's a web tool someone from these parts made that tries to estimate how much time DLSS will take on Drake given a user-provided clock speed, since we have little idea how that will go. Whereas on a desktop computer the DLSS costs are small enough not to worry about, in a Drake scenario closer to the worse end, a 60fps game upscaling to 4K could end up spending nearly half its graphical processing time on DLSS alone. So a developer might need to make calls like choosing between DLSS 4K with 50% of GPU time left for everything else, DLSS 1440p with 75% left, or DLSS 1440p at 30fps with 88% left.
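The arithmetic behind those percentages, with the per-upscale costs as stand-in numbers in the spirit of that tool's worse-end estimates:

```python
frame_60fps = 1000 / 60            # 16.7 ms per frame at 60fps
frame_30fps = 1000 / 30            # 33.3 ms per frame at 30fps

dlss_4k_ms = 8.3                   # assumed worse-end cost to upscale to 4K
dlss_1440p_ms = 4.2                # assumed cost to upscale to 1440p

print(1 - dlss_4k_ms / frame_60fps)      # ~50% of GPU time left (4K, 60fps)
print(1 - dlss_1440p_ms / frame_60fps)   # ~75% left (1440p, 60fps)
print(1 - dlss_1440p_ms / frame_30fps)   # ~87-88% left (1440p, 30fps)
```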
I mean, doesn't DLSS 2.xx work well on even an RTX 2060?

If the Switch can't get the DLSS 2.xx performance of the RTX 2060 despite releasing in late 2023 to late 2024... That would be pretty rough.
It's going to be behind 2060 in almost every way, just the nature of being tiny and using very little power. It's like putting a Switch next to a GTX 660, whereas I've kind of thought of Switch as the 910 that never existed.
Am I really that forgettable?
Here’s the quote with link to confirm the data:
Sorry, man, the vast majority of people around here end up blending together unless they're a raccoon with a raccoon picture or something. :)
 
If it is not announced this year, we really won't know.
Should we assume that T239 has been scrapped?

I would say if Switch Redacted hasn't been released, or at least announced with a release date, by this time next year, then yes, it was probably cancelled. The question is, what would cause an SoC that was so far along it was going through the tape-out process to end up getting cancelled? I have yet to hear a good explanation for how/why that would happen. When you're that deep into development, there really wouldn't be any big surprises. They would have known everything they needed to know by then, so why would you cancel a product right before it's ready for production? Even with chip shortages and high prices, I could see that causing Nintendo to pump the brakes a bit and wait for things to settle down before moving forward, but not to outright cancel a product that was deep into development and go back to the drawing board with nothing ready any time soon. It's fair to assume the next hardware will be a Switch successor, so what would it be about T239 that would cause Nintendo to pull the plug? They wouldn't have needed to get to the tape-out process to know if performance or power draw were insufficient.
 
In practice, what will happen is that the tensor cores will simply not be used at all in handheld mode as using DLSS to push the handheld from 480p to 720p is a massive waste.

These will be a docked only thing for like 99.99% of software released.
 
I would think that, by now, Nvidia would have made some examples of tensor cores being usable in games if they found some theoretically viable use-cases. they're not one to pass on a chance to jack themselves off
Nintendo themselves would be a lot more interested in testing out stuff like that than Nvidia I'd imagine.
 
Nintendo themselves would be a lot more interested in testing out stuff like that than Nvidia I'd imagine.

1. No
2. Nintendo's technical skill here is so far behind NVIDIA's that the idea they could do this without years and years and years of effort is very dubious.

NVIDIA has worked with this hardware for a very long time; Nintendo literally never has.
 

This doesn't read like Drake won't release for another few years. If anything, they don't want to pull a Sega by talking about a successor before a full-on announcement; when Sega did that, it affected sales of their current platform at the time. This was not an issue back when Nintendo mentioned the NX, because sales of Wii U and 3DS were already on the low end, so NX helped spark people's interest. Doing something like that right now would negatively impact Nintendo Switch sales.
 
1. No
2. Nintendo's technical skill here is so far behind NVIDIA's that the idea they could do this without years and years and years of effort is very dubious.

NVIDIA has worked with this hardware for a very long time; Nintendo literally never has.
Nvidia probably wouldn't invest significant time into showing off some gameplay feature that requires AI cores, 'cause nobody would make a game with RTX as a minimum requirement. A game console changes the equation.
 
I normally don't trust people on the internet and certainly not video games insiders, but @KMStwo has yeah-ed some of my posts and what he's saying aligns well with what my tea leaves were predicting back in January, so I trust him.
 
sorry friend but I don't trust like that
Regardless of the veracity of this particular post I would be pretty shocked if Nintendo didn't at least try to do something creative and unexpected with this new technology. So it's not exactly anything surprising if true.
 

Very interesting that he addresses it, and that it's not a straight denial or even a 'we're always working on a successor'.

The emphasis on the strength of the current Switch is also not a lie, just as Nintendo routinely talks about the older hardware even as new hardware is announced or released; there is always a 1-2 year overlap where they wind down the prior console, so all he says about the Switch could be 100% true and they could still intend to milk it.
 
Speaking of AI applications, I was thinking... what about on-the-fly texture upscaling?

That way we could save on storage (only store lower-res textures) as well as reduce memory bandwidth stress (since we'll only be streaming those lower-res textures), and then at some point later down the pipeline, have an AI program upscale them. Perhaps not for all textures, but for select "background" textures?

Idk. No clue if this even makes sense.
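It does make sense in principle; it's essentially load-time super-resolution. A hedged sketch of where such a model would slot in (PyTorch, with `sr_model` standing in for whatever trained upscaler would actually be used):

```python
import torch
import torch.nn.functional as F

def upscale_texture(tex: torch.Tensor, sr_model=None) -> torch.Tensor:
    """Upscale a CxHxW texture 2x once, when it's streamed in, then cache it."""
    if sr_model is not None:
        return sr_model(tex.unsqueeze(0)).squeeze(0)     # hypothetical SR net
    # Fallback: plain bilinear 2x, just to show where the model slots in.
    return F.interpolate(tex.unsqueeze(0), scale_factor=2,
                         mode="bilinear", align_corners=False).squeeze(0)

tex = torch.rand(3, 512, 512)      # low-res RGB texture as shipped on disk
hi = upscale_texture(tex)          # 3x1024x1024, paid once at load time
```

The catch is the one this thread keeps circling: the upscale has to be cheap enough, and the upscaled copies still occupy full-size VRAM once resident, so the savings are in storage and streaming bandwidth rather than memory footprint.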
 
Very interesting that he addresses it, and that it's not a straight denial or even a 'we're always working on a successor'.

The emphasis on the strength of the current Switch is also not a lie, just as Nintendo routinely talks about the older hardware even as new hardware is announced or released; there is always a 1-2 year overlap where they wind down the prior console, so all he says about the Switch could be 100% true and they could still intend to milk it.
doom answer, it's almost as though due to a project getting scrapped they aren't actively working on a successor for once
 
NVIDIA has worked with this hardware for a very long time; Nintendo literally never has.

Seems quite a bold statement honestly.

Companies do internal testing/evaluations all the time; why should it be different this time?

Especially when we're talking about a partnership that has been going on for almost a decade: I'm quite confident Nvidia has already given a few toys to Nintendo software developers for them to fiddle with.
 



I interpret this in 1 of 2 ways:

1) They are out of their minds and will push this device with no successor/revision until 2027, marking 10 years. Very unlikely.

2) Crossgen, crossgen, crossgen. GB NSO was just announced, meaning the service is here to stay. Furthermore, that means Nintendo SWITCH Online will continue to be supported at the very least. And might I add, MP4 still has no launch date?

I think they will continue to drop games for it and the successor.
 
1. No
2. Nintendo's technical skill here is so far behind NVIDIA's that the idea they could do this without years and years and years of effort is very dubious.

NVIDIA has worked with this hardware for a very long time; Nintendo literally never has.
eeeeeeeeeh, that's really underselling how easy it is for people to get into things. independent researchers have made advances, so why couldn't a big company like nintendo? I mean, we just now got an updated patent showing Nintendo working on home-grown AI upscaling. after that, you can't make definitive statements like this. if Nvidia opens up the black boxes to them, then relevant people at Nintendo will test stuff out
 
Nvidia made Portal RTX to justify the 4090; they'd absolutely make ray-traced PhysX and HairWorks to sell you a 5090.

Quake II RTX was the same idea: the tech is good, but it's novel enough that Nvidia needs to showcase it like this. I'm 100% sure they paid CD Projekt for the path-traced version of Cyberpunk
 
Nvidia made Portal RTX to justify the 4090; they'd absolutely make ray-traced PhysX and HairWorks to sell you a 5090.

Quake II RTX was the same idea: the tech is good, but it's novel enough that Nvidia needs to showcase it like this. I'm 100% sure they paid CD Projekt for the path-traced version of Cyberpunk
unlike tensor cores, these are all agnostic. hell, any tensor core research could be agnostic if other companies' AI cores are similar enough

and funny you mention ray-traced hair
While the horsepower of modern GPUs is booming, physically based real-time hair rendering is still far from being solved. While card-based hair used to be the most common modeling technique in games, strand-based hair is being used more and more frequently. However, drawing hair as millions of thin triangle strips with rasterization causes overdraw issues, putting a huge burden on GPUs. Considering the climbing ray-tracing performance of NVIDIA GPUs, it's worth exploring whether real-time ray-traced rendering can be a better choice than rasterization. We'll describe several attempts at using NVIDIA RTX to render hair strands with better efficiency and quality. There are three main aspects included: (1) efficient and accurate ray-strand intersection in DXR; (2) hybrid multiple scattering solutions and (3) strands-targeted de-noising.
 
Like most Nintendo executives, the next revealing Bowser interview will be his first. Kudos to the interviewer for asking about the Successor and the $70 Zelda though. And lol for the Madden question.
 
I interpret this in 1 of 2 ways:

1) They are out of their minds and will push this device with no successor/revision until 2027, marking 10 years. Very unlikely.

2) Crossgen, crossgen, crossgen. GB NSO was just announced, meaning the service is here to stay. Furthermore, that means Nintendo SWITCH Online will continue to be supported at the very least. And might I add, MP4 still has no launch date?

I think they will continue to drop games for it and the successor.
I would say (2) with the added context that for example PlayStation were doing cross-gen as late as November 2022 with one of their biggest franchises, so it's a pretty reasonable thing to expect for all hardware publishers imo. I could see Switch releases continuing into 2025 even if Switch 2 releases late 2023.

Edit: At the same time, we will of course see releases only on Switch 2 that cannot be made to run on Switch 1, just like R&C: Rift Apart.
 
unlike tensor cores, these are all agnostic. hell, any tensor core research could be agnostic if other companies' AI cores are similar enough

and funny you mention ray-traced hair

Yes, hair seemed to make sense as something you would want to tackle through ray tracing.

I don't think the point of these games is to create nvidia "exclusives" as much as to showcase what these cards are capable of. Portal RTX was the benchmark used to show how powerful the 4090 is after all
 
Sub-GPT-2 levels kind of suck ass, and a hyper-optimized GPT-2 is probably sucking up at least 4 GBs of RAM.

Go sub-GPT-2 and you might as well just develop a complex Mad Libs structure, as it won't be much worse and you'll be using almost no RAM.
There are some developers in this thread - myself included - who would disagree with you
 
Yes, hair seemed to make sense as something you would want to tackle through ray tracing.

I don't think the point of these games is to create nvidia "exclusives" as much as to showcase what these cards are capable of. Portal RTX was the benchmark used to show how powerful the 4090 is after all
they definitely aren't, but as open as their development tools are, they aren't just doing this to sell hardware, I think. I think Nvidia wants to be the definer of technology. remember it was Nvidia who coined "GPU" after all
 
There are some developers in this thread - myself included - who would disagree with you

Yeah, and I have actually used GPT-2, GPT-3, and neural networks in general a lot, and I massively disagree that they have any potential for real-time usage in video games. The RAM requirements are just absurd if you want performance better than "absolutely terrible."

This is just a production side tool for creating more text in a game that already has a massive amount of text.

If you would ever do real time text generation for some reason, it would obviously be done on cloud servers where you don't have these technical limitations and then the text would be streamed to the game itself. Always online games are very common and text is very small data-wise so it would barely take up any bandwidth from a user to do.
 
Yeah I'll need specific examples of anything AI related before I get excited or interested in it.

And even if they don't know the "technical and hardware aspects", do they know an approximate completion date target for whatever project they are working on?
I may not be them but if that person

Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
 

