Seems quite a bold statement honestly.
Companies do internal testing/evaluations all the time, why should it be different this time?
Especially when we're talking about a partnership that has been going on for almost a decade: I'm quite confident Nvidia has already given a few toys to Nintendo software developers for them to fiddle with.
NERD has been patenting DLSS-related stuff for a few years.
We're not talking about ChatGPT-level competition, and the processing power available will be very constrained.
It shouldn't be that hard for a company like Nintendo to come up with a usable gimmick, especially when they're partnering with Nvidia.
Machine learning is going to change gaming forever by allowing much faster generation of textures and animations during production.
It will have no real time usage whatsoever on the Switch 2 other than hopefully image reconstruction.
The thing you need to realize about neural networks is that they're brute-force structures: they only work by throwing massive amounts of data at problems that could often be fit with a plain regression.
Even if Nintendo somehow came up with a neural network idea to use in real time, they probably wouldn't have the vast amount of data actually needed to make it work.
I disagree.
I agree that AI use in video game production will grow by an order of magnitude in the coming years, but some of it can and will trickle down to real-time processing.
It's not that hard to come up with useful gimmicks, even with the limited power. It will always be a per-game scenario.
If the next Switch has a camera, those Tensor cores will certainly be used for AR and object detection.
There are interesting speech-generation tools that even run on mobile. I'm surprised those haven't made it yet into video games.
All the voice "acting" could be generated during production and all the sound files shipped with the game. It would save development time and money on voice actors.
But considering how Nintendo likes to optimize install size, and how expensive storage is on mobile hardware, they could use real-time speech generation instead.
If it's convincing enough for 95% of the voices, while still relying on voice-acting for main characters, it will be done.
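To get a feel for the storage argument, here's a back-of-the-envelope comparison. Every number below is an assumption for illustration (hours of dialogue, codec ratio, model size), not a figure from any actual game or TTS system:

```python
# Rough comparison: shipping pre-rendered voice files vs. a small
# on-device text-to-speech model. All constants are assumptions.

HOURS_OF_DIALOGUE = 20        # assumed total voiced dialogue in a big game
SAMPLE_RATE = 24_000          # Hz, mono
BYTES_PER_SAMPLE = 2          # 16-bit PCM
COMPRESSION_RATIO = 10        # rough gain from a lossy audio codec

pcm_bytes = HOURS_OF_DIALOGUE * 3600 * SAMPLE_RATE * BYTES_PER_SAMPLE
compressed_gib = pcm_bytes / COMPRESSION_RATIO / 2**30

TTS_MODEL_MIB = 80            # assumed size of a compact mobile TTS model

print(f"Compressed voice files: ~{compressed_gib:.2f} GiB")
print(f"On-device TTS model:    ~{TTS_MODEL_MIB} MiB")
```

Even with generous compression, pre-rendered voice files land in the hundreds of megabytes, roughly an order of magnitude more than a compact generation model would need.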
Speech recognition is another untapped area in gaming. People would probably try it once and then never again, but it would nonetheless make for a good marketable gimmick, better than IR sensors or whatnot.
In a real-time strategy game? -"Team 1 do this, team 2 do that."
In an action game with a companion? -"Protect me. Cast that spell. Attack the big one. Hold the little ones."
In a visual novel? Have unscripted conversations.
It may not be that useful in practice, at least the first iteration, but it would sell itself to casuals.
BotW has amazing enemy AI that reacts to a lot of possible conditions. The underlying implementation is probably very complex and hard to evolve without the occasional full rewrite.
Having an AI stack for this would dramatically simplify their codebase and allow for future possibilities never seen in gaming.
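One way to picture the simplification: instead of nested hand-written conditionals for every situation, enemy behaviour can be reduced to scoring a fixed set of actions and picking the best, and the scoring part is exactly what a trained model could later replace. A rough sketch where the state fields, actions, and weights are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class EnemyState:
    health: float           # 0.0 .. 1.0
    player_distance: float  # metres
    has_weapon: bool

# Hand-tuned scoring functions; a trained policy could replace this
# dictionary without touching the surrounding game code.
ACTION_SCORES = {
    "attack": lambda s: (1.0 if s.has_weapon else 0.3) / max(s.player_distance, 1.0),
    "flee": lambda s: (1.0 - s.health) * 0.8,
    "grab_weapon": lambda s: 0.0 if s.has_weapon else 0.6,
}

def choose_action(state: EnemyState) -> str:
    """Pick the highest-scoring action for the current state."""
    return max(ACTION_SCORES, key=lambda a: ACTION_SCORES[a](state))

wounded = EnemyState(health=0.2, player_distance=10.0, has_weapon=False)
print(choose_action(wounded))
```

The appeal is that behaviour lives in one swappable scoring table rather than being scattered across branching code, which is what makes an eventual ML-backed version plausible.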
A game like Nintendogs would be the perfect test bed for behavioural AI. I'm pretty sure it would then expand into enemies and NPCs across a lot of games.