
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST|

I think it would be a good move for both of them to take this into account, but I understand that this model might have been in development for a long time now, and it might already be too late.
There are also still pretty substantial performance penalties when enabling ray tracing, especially when using mid-range consumer Ampere GPUs, as well as entry-level laptop Ampere GPUs, although DLSS can mitigate the performance penalties associated with ray tracing, albeit with image quality taking a slight hit.

At the end of the day, Nvidia's ultimately responsible for convincing Nintendo that the benefits of adding RT cores to Dane significantly outweigh the drawbacks. And I personally think Nvidia has a 45% to 55% chance of convincing Nintendo to add RT cores to Dane.
 
It's hard to imagine Nintendo being so forward-thinking, but if Nvidia has succeeded it will be great to see upcoming first-parties make use of this technology.
 
the recent talk of DLSS compute time vs tensor core performance got me thinking about the theory that Xavier kits are in the hands of devs. Gen 1 tensor cores probably aren't that good with DLSS, so if they got it working with Xavier, it's probably a lower-quality algorithm

it's less about "forward thinking" and more about extracting performance from a given power input. RT isn't efficient when input power is so low.

speaking of, maybe Nintendo adopts voxelized methods?

 
What are people expecting the wattage envelope on the new Switch to be? In the same range as the OG Switch, about 10 watts, or higher?
 
For the devkits approximating Dane that are out to developers, what would those consist of? Some sort of PC rig? How do you even approximate for a SoC that’s not taped out? It couldn’t be as simple as sticking an Xavier-based SoC in there, could it? We know there’s compatibility issues between Maxwell and other Nvidia architectures if they’re not going to be natively BC.
 
you can get an arm-based pc with a tensor core enabled gpu. there aren't really approximations for an 8 big core cpu, but large companies like Nvidia would be able to get access to them
 
It's crazy to think we will be getting a portable PS4 experience in handheld mode. Being able to play all PS4 games without a hitch and without relying on DLSS, in 720p. Hell, in some cases even better performance due to the better CPU. Something like 700-800 GFLOPs in Ampere would take us there I think. Even a worst-case scenario would be Xbox One base performance parity at 720p. Doom 2016/Eternal and Witcher 3 with all the details and 60fps (for Doom) is insane!

I wonder how the RAM bandwidth would be allocated in handheld mode, though 🤔, assuming the max is 102GB/s. I never understood why handheld mode on the current Switch is capped to a 1333MHz memory clock vs the full 1600MHz. Only a 20% difference. 🤔
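For reference, the napkin math behind those numbers: peak bandwidth is just transfer rate times bus width. This is only a rough sketch; the docked/handheld LPDDR4 clocks are the publicly documented ones for the current Switch (the documented handheld memory clock is ~1331MHz, close to the 1333 figure above), while the 102GB/s figure assumes a 128-bit LPDDR5-6400 configuration, which is speculation rather than a confirmed spec.

```python
# Napkin math: peak bandwidth (GB/s) = transfer rate (MT/s) * bus width (bits) / 8 / 1000.
# Current Switch: LPDDR4 on a 64-bit bus. The 102 GB/s case assumes 128-bit LPDDR5-6400
# for the successor, which is speculation, not a confirmed spec.

def peak_bandwidth_gbps(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    return transfer_rate_mtps * bus_width_bits / 8 / 1000

docked   = peak_bandwidth_gbps(1600 * 2, 64)   # DDR: 1600 MHz clock -> 3200 MT/s -> ~25.6 GB/s
handheld = peak_bandwidth_gbps(1331 * 2, 64)   # ~21.3 GB/s
print(f"docked {docked:.1f} GB/s, handheld {handheld:.1f} GB/s, "
      f"{(1 - handheld / docked) * 100:.0f}% lower in handheld")

speculated = peak_bandwidth_gbps(6400, 128)    # hypothetical LPDDR5 config -> ~102.4 GB/s
print(f"speculated successor config: {speculated:.1f} GB/s")
```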

You got a summary?
The developer talks a lot about the transition from the PS360 gen to PS4/XB1: what specs they wanted, and how what they actually got impacted AAA game development. It's not often we hear from actual modern AAA developers, so I thought some of you might enjoy it. RAM size and bandwidth were the biggest win for them, as well as every system having a hard drive and XB1 having a Blu-ray drive.

Switch is also in between generations in terms of specs, so it has an adjacent link to this topic, considering the hope is that Switch 4K lands between XB1 and PS4 in real-world performance, with DLSS on top.
 
They can focus on RT for the Switch 2 Pro as one of the upgrades. I personally think they should prioritize the tensor cores and getting as many GPU cores as possible into the base Switch 2 within the given space constraints. I don't know how much space RT cores take up, but if it's noticeable I wouldn't mind waiting for a Switch 2 Pro or Switch 3 to have that functionality.
 
Doesn’t sound feasible for portable mode in its current iteration then.
using the current model, no. but a bespoke model that prioritizes performance over quality could pull it off. and since NERD is also working on their own version, they would make the model specifically for Dane's hardware rather than for much more powerful hardware
 
you can get an arm-based pc with a tensor core enabled gpu. there aren't really approximations for an 8 big core cpu, but large companies like Nvidia would be able to get access to them
That might not be necessary as far as preliminary devkits are concerned, with the recent announcement of Arm Virtual Hardware.

And Nvidia's Carmel CPU configuration on the Jetson AGX Xavier devkit is probably the closest approximation to a homogeneous configuration of 8 CPU cores.
 
Nvidia has been in a position to offer virtualized dev kits for a while now. offering that as an option to the earliest partners isn't a crazy idea.

the thing with using Xavier as an approximation is that I wonder if its gen 1 tensor cores are up to the task of upscaling
 
Would it be possible for Switch 2 devs to have the option to change clock speeds depending on their game requirements?

For example:

Dev A: clocks the GPU at a frequency that yields 1.3 TFLOPs in docked mode so they can clock the CPU at 2.2GHz in docked mode
Dev B: clocks the CPU at 1.4GHz in docked mode so they can clock the GPU at a frequency that yields 2.1 TFLOPs in docked mode

I never understood why console devs didn't seem to be allowed to pick clock speeds based on their development needs, so I just assumed it wasn't possible.
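For what it's worth, the Dev A / Dev B numbers map directly onto GPU clocks once you fix a core count, since FP32 throughput is just cores × 2 ops per clock × frequency. A quick sketch; the 1536-core (12 SM Ampere) figure is a purely hypothetical configuration, since Dane's actual layout isn't known:

```python
# Illustrative only: FP32 TFLOPs = CUDA cores * 2 ops per clock (FMA) * clock in GHz / 1000.
# The 1536-core (12 SM Ampere) count below is a hypothetical, not a known Dane spec.

def gpu_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000

def clock_mhz_for_tflops(cuda_cores: int, target_tflops: float) -> float:
    return target_tflops * 1e6 / (cuda_cores * 2)

CORES = 1536  # hypothetical 12-SM Ampere GPU

print(f"Dev A's 1.3 TFLOPs target -> ~{clock_mhz_for_tflops(CORES, 1.3):.0f} MHz GPU clock")
print(f"Dev B's 2.1 TFLOPs target -> ~{clock_mhz_for_tflops(CORES, 2.1):.0f} MHz GPU clock")
print(f"sanity check: {CORES} cores @ 768 MHz = {gpu_tflops(CORES, 0.768):.2f} TFLOPs")
```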
 
I'm not convinced RT will be a thing for Switch 2. I don't think it's powerful enough to have any meaningful impact on a console that is only 2 TFLOPs, at least when you compare it to the current RT-supported GPU cards. You likely need something like an RTX 2060, which is 6 TFLOPs. And having DLSS is a must for performance, because otherwise RT games take up a lot of GPU power and tank framerates to 30fps or less.
There is some RT being experimented with via tech demos, so it could get interesting. Just not expecting RT, unless Nintendo releases a separate home console that is 6 TFLOPs or higher that is meant to play Switch 2 games at higher performance.
 
Made a logo for the new switch

[attached image: logo mockup]
 
Would it be possible for Switch 2 devs to have the option to change clock speeds depending on their game requirements?

For example:

Dev A: clocks the GPU at a frequency that yields 1.3 TFLOPs in docked mode so they can clock the CPU at 2.2GHz in docked mode
Dev B: clocks the CPU at 1.4GHz in docked mode so they can clock the GPU at a frequency that yields 2.1 TFLOPs in docked mode

I never understood why console devs didn't seem to be allowed to pick clock speeds based on their development needs, so I just assumed it wasn't possible.
the issue with CPU clocks is that the discrepancy can cause problems between handheld and docked modes. if performance and design are that tightly tied to clocks, then it rules out handheld mode

I'm not convinced RT will be a thing for Switch 2. I don't think it's powerful enough to have any meaningful impact on a console that is only 2 TFLOPs, at least when you compare it to the current RT-supported GPU cards. You likely need something like an RTX 2060, which is 6 TFLOPs. And having DLSS is a must for performance, because otherwise RT games take up a lot of GPU power and tank framerates to 30fps or less.
There is some RT being experimented with via tech demos, so it could get interesting. Just not expecting RT, unless Nintendo releases a separate home console that is 6 TFLOPs or higher that is meant to play Switch 2 games at higher performance.
there's always software solutions like Lumen, and voxel solutions like SVOGI, which is already in the Switch version of Crysis Remastered. hell, even Nvidia's RTXGI is usable on non-RTX hardware, including the XBO and PS4

 
Yeah, and considering how much Nintendo focuses on lighting in their games, this would be a godsend, I say
 
If the DLSS Switch is the true successor to the Switch then how are we looking with things like internal storage and cartridge size costs these days?

A lot of games from the PS4 era take up a lot more space and require mandatory installs when compared to the PS3 era. If we’re expecting the DLSS Switch to be roughly around PS4 level in power, then we can expect games taking up much more storage.

Have we reached a point where a 32GB Switch cartridge isn’t really expensive for publishers?
This was part of why I asked about hardware acceleration for the AV1 codec. PS4/XBO games used a lot of poorly-compressed video and audio files to ensure the CPU and GPU didn't have to spend excess cycles to decompress them. Hardware acceleration for advanced audio-visual compression to reduce file sizes can greatly improve game package sizes with FAR fewer CPU/GPU cycles, making it comparable to using an uncompressed lossless audio file or an h.264 video file.

As it was, some games were diminished in size using these techniques. Crash Trilogy was 5GB on Switch compared to 20GB on PS4, and that's because the Switch CPU was actually more capable in this specific way than PS4's.
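For a sense of scale, here's some back-of-the-envelope math on what the video/audio compression choice does to a cutscene's footprint. The bitrates are illustrative ballpark figures, not measurements from any particular game, and the codec comparisons assume roughly similar perceptual quality:

```python
# Rough file-size math for a 10-minute pre-rendered cutscene plus its audio track.
# Bitrates are illustrative ballpark figures, not measurements from any specific game.

def size_mb(bitrate_kbps: float, minutes: float) -> float:
    return bitrate_kbps * 1000 / 8 * minutes * 60 / 1e6   # kbps -> bytes/s -> MB

MINUTES = 10
tracks = {
    "video, lightly compressed high-bitrate H.264": 30_000,   # kbps
    "video, typical H.264":                          8_000,
    "video, H.265/VP9 at similar quality":           5_000,
    "video, AV1 at similar quality":                 4_000,
    "audio, uncompressed 16-bit 48 kHz stereo PCM":  1_536,
    "audio, lossy (e.g. Opus/Vorbis)":                 160,
}
for name, kbps in tracks.items():
    print(f"{name:48s} ~{size_mb(kbps, MINUTES):7.1f} MB")
```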
 
Would it be possible for Switch 2 devs to have the option to change clock speeds depending on their game requirements?

For example:

Dev A: clocks the GPU at a frequency that yields 1.3 TFLOPs in docked mode so they can clock the CPU at 2.2GHz in docked mode
Dev B: clocks the CPU at 1.4GHz in docked mode so they can clock the GPU at a frequency that yields 2.1 TFLOPs in docked mode

I never understood why console devs didn't seem to be allowed to pick clock speeds based on their development needs, so I just assumed it wasn't possible.
The Switch OS already allows some limited ability to change clocks, but only to presets picked out by Nintendo. They do have some special presets like the one that diverts most of the power to the CPU for loading, but I don't think they'll go too crazy adding new ones for Dane.
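For anyone curious, here's roughly what that preset idea looks like written down. A minimal sketch; the clock figures are the commonly cited launch-Switch (Erista) values from public homebrew documentation, quoted from memory, and the code structure is purely illustrative rather than anything from Nintendo's actual SDK:

```python
# Sketch of the preset idea described above. Figures are the commonly cited Erista clocks
# from public homebrew documentation (approximate, from memory); structure is illustrative.

PRESETS = {
    # name:              (cpu_mhz, gpu_mhz, memory_mhz)
    "handheld":          (1020.0,  384.0,  1331.2),
    "handheld_low_gpu":  (1020.0,  307.2,  1331.2),
    "docked":            (1020.0,  768.0,  1600.0),
    "boost_loading":     (1785.0,   76.8,  1600.0),  # divert most of the budget to the CPU
}

def pick_preset(docked: bool, loading_screen: bool) -> str:
    if loading_screen:
        return "boost_loading"
    return "docked" if docked else "handheld"

print(PRESETS[pick_preset(docked=False, loading_screen=True)])
```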
 
they could use the gpu for decompression instead of the cpu with Dane. might be faster and use less power
 
I'd second that it's likely for there to be AV1 decode (ie playback) acceleration. It seems unlikely that the NVENC and NVDEC blocks will be ripped out (as I get the impression that Tegra X1 still had Maxwell's NVENC/NVDEC capabilities), so for the time being, the most likely assumption is that Dane will at least have Ampere's encoding/decoding capabilities.

...that said, to be a bit more thorough, there is a curveball. Looking at wikipedia, the A100's NVDEC doesn't have AV1 support, huh. But that's older than desktop Ampere.
 
Yes, the X1 did have the NVDEC. The Yuzu emulator needed to support calls to it to support video playback.

 
We did discuss it, and I agree. I posted a similar thought a few days ago, when ILikeFeet shared a post pointing out that even the desktop version of DLSS could likely get marginally better image quality if it were deeper, but that Nvidia probably selected the network size to balance image quality with their performance goals on desktop Turing/Ampere. It seems very reasonable to me that the solution for the Dane Switch will apply the same principle and choose a lighter weight network architecture, with either fewer layers or fewer channels within each layer.
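To make the "fewer layers or fewer channels" point concrete, here's a minimal PyTorch sketch of a toy single-frame upscaler where both knobs are parameters. This is not DLSS or NERD's network; the two configurations at the bottom are made up purely to show how the knobs trade parameters and compute for quality headroom:

```python
# A minimal PyTorch sketch of the "fewer layers or fewer channels" idea: a toy
# single-frame upscaler whose cost scales roughly with depth * channels^2.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, channels: int = 32, layers: int = 4, scale: int = 2):
        super().__init__()
        body = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 1):
            body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        # predict scale^2 * 3 channels, then rearrange them into a larger image
        body += [nn.Conv2d(channels, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale)]
        self.net = nn.Sequential(*body)

    def forward(self, x):
        return self.net(x)

# "Desktop-sized" vs "handheld-sized" configs (both made up) and their parameter counts.
for ch, ly in [(64, 8), (32, 4)]:
    model = ToyUpscaler(channels=ch, layers=ly)
    params = sum(p.numel() for p in model.parameters())
    out = model(torch.randn(1, 3, 540, 960))   # 960x540 input -> 1920x1080 output
    print(f"{ch} channels x {ly} layers: {params / 1e3:.0f}k params, output {tuple(out.shape)}")
```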

(also, welcome to the new board!)

Thanks! Hopefully some kind of a leak will shed more light on this, although I suspect even after the new device has been released we probably won't be much the wiser on the Nintendo-specific DLSS version.

I don't know if you missed it, but Nintendo (NERD specifically) filed a patent application back in March of last year for their own method of machine learning upscaling for gaming. This, plus the fact that NERD confirmed they have their own deep learning solution (in PR released around the time Super Mario 3D All-Stars came out), makes me agree that whatever machine learning solution they use for Dane, it likely won't be standard DLSS, at least when it comes to their games.

I'm guessing they want something better tailored to their own engines and art styles, and more importantly something that works better for older games that they want to upres.

I had seen it, although didn't have the chance to read it thoroughly. My understanding, correct me if I'm wrong, is that the patent didn't describe any temporal component, which makes it quite different (and complementary) to DLSS, which does have a temporal component. All other things being equal, an ML upscaling approach with a temporal component like DLSS should perform better, but requires more work to embed within a game engine. I'd expect any new game developed primarily for the new device to use DLSS, but for older games a non-temporal ML upscaling solution could be almost plug-and-play, and it could be a way for Nintendo to easily patch old titles to get 4K output with minimal effort.

In fact, it would probably be possible to apply non-temporal ML upscaling to the entire existing library at an OS level without patching, although it would result in things like UI elements being upscaled (which may or may not be desirable) and would probably have to be toggled on a per-game basis, as you wouldn't want to apply it to pixel art 2D games, for example.
 
Hmmm maybe I misunderstand what "temporal" refers to in this field but I assumed it was the use of motion vectors from previous frames in the reconstruction, which those patents do mention.

But yeah I do believe a lot of their research and development of this solution is likely geared towards emulation of their old library.
 
What I mean by temporal is using data from previous frames, typically with pixels jittered between each frame, and also incorporating motion vectors. Basically taking the same kind of data as TAA. I had another look over the patent, and while it does mention motion vectors, it only seems to be provided as an example of other data that may be fed into the network, such as z-buffers. This might improve the image quality (if motion vectors are available), but I wouldn't expect it to approach DLSS-style upscaling in quality.

The actual structure of the network would also have to be quite different if it were a true temporal reconstruction approach, as there's not just a single frame of pixel data coming in. It would most likely take the form of a recurrent neural net, but the example network configurations they show seem to be pretty straight-forward single-frame upscaling.

But straight-forward single-frame upscaling is the best option for their old library anyway. Going back into old engines to extract motion vectors and implement pixel jittering just wouldn't be worth it, but an approach like they describe in the patent application should be very simple in comparison, to the point where they can cheaply include upscaling in a large portion of their back-catalog.
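To spell out the difference in inputs, here's a rough sketch of the data each approach consumes per frame. The shapes, function names, and the warp itself are illustrative, not taken from any real implementation:

```python
# A single-frame ("spatial") upscaler only needs the current low-res color buffer; a temporal,
# DLSS-style approach also wants the previous high-res output re-projected with per-pixel
# motion vectors (plus sub-pixel jitter, omitted here). Shapes and names are illustrative.
import torch
import torch.nn.functional as F

def warp_previous(prev_hr: torch.Tensor, motion_px: torch.Tensor) -> torch.Tensor:
    """prev_hr: (N,3,H,W) previous output; motion_px: (N,2,H,W) motion vectors in pixels."""
    _, _, h, w = prev_hr.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0) - motion_px.permute(0, 2, 3, 1)
    grid[..., 0] = grid[..., 0] / (w - 1) * 2 - 1   # normalize to [-1, 1] for grid_sample
    grid[..., 1] = grid[..., 1] / (h - 1) * 2 - 1
    return F.grid_sample(prev_hr, grid, align_corners=True)

lr_frame  = torch.randn(1, 3, 540, 960)    # current low-res color: all either method needs
prev_out  = torch.randn(1, 3, 1080, 1920)  # last frame's high-res output (temporal only)
motion_hr = torch.randn(1, 2, 1080, 1920)  # motion vectors (temporal only)

single_frame_input = lr_frame              # roughly what the patent's example networks consume
temporal_input = torch.cat(
    [F.interpolate(lr_frame, scale_factor=2, mode="bilinear", align_corners=False),
     warp_previous(prev_out, motion_hr)], dim=1)
print(single_frame_input.shape, temporal_input.shape)
```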
 
I don't see the point in upscaling the back catalogue. just going back and increasing the resolution is enough. unless you mean post-processing upscaling, which I'd rather they avoid
Yeah. Hopefully they figure something out regarding the resolution of 3D NSO games. I don’t want N64 to still be 720p docked on Switch 4K, even if that means running games at higher than native res in handheld.
 
This was part of why I asked about hardware acceleration for the AV1 codec. PS4/XBO games used a lot of poorly-compressed video and audio files to ensure the CPU and GPU didn't have to spend excess cycles to decompress them. Hardware acceleration for advanced audio-visual compression to reduce file sizes can greatly improve game package sizes with FAR fewer CPU/GPU cycles, making it comparable to using an uncompressed lossless audio file or an h.264 video file.

As it was, some games were diminished in size using these techniques. Crash Trilogy was 5GB on Switch compared to 20GB on PS4, and that's because the Switch CPU was actually more capable in this specific way than PS4's.
Interesting, so Ampere would presumably do this much better, correct? And I thought this would only be for video streaming and not for scripted sequences in real-time graphics, like the cutscenes in, say… Metroid Dread.
 
Video streaming, pre-rendered cutscenes (which are still in use in AAA games, to mask loading times) and (depending what audio hardware acceleration is available with the Orin CPU) any audio files that would benefit from lossless compression.

Ampere is capable of decoding AV1, which is what Netflix uses whenever possible to provide 4K video without obscene download requirements, which means smaller file sizes for pre-rendered video of any size (when AV1 isn't available, it uses VP9).
Tegra X1 also had compressed video hardware acceleration, but was limited to AV1's pseudo-predecessor, VP9, which was good, but not as good as AV1 is.
Not 100% sure what games used for compressed audio on Switch or what is available with the Orin CPU, but it seems likely some form of audio compression was previously used to slim down game package sizes.
 
What I mean by temporal is using data from previous frames, typically with pixels jittered between each frame, and also incorporating motion vectors. Basically taking the same kind of data as TAA. I had another look over the patent, and while it does mention motion vectors, it only seems to be provided as an example of other data that may be fed into the network, such as z-buffers. This might improve the image quality (if motion vectors are available), but I wouldn't expect it to approach DLSS-style upscaling in quality.

The actual structure of the network would also have to be quite different if it were a true temporal reconstruction approach, as there's not just a single frame of pixel data coming in. It would most likely take the form of a recurrent neural net, but the example network configurations they show seem to be pretty straight-forward single-frame upscaling.

But straight-forward single-frame upscaling is the best option for their old library anyway. Going back into old engines to extract motion vectors and implement pixel jittering just wouldn't be worth it, but an approach like they describe in the patent application should be very simple in comparison, to the point where they can cheaply include upscaling in a large portion of their back-catalog.
Thank you for the clarification, that makes sense.

Considering the patent was filed by NERD I definitely agree that its use case is likely primarily in upscaling older games. I wonder if it's even already been used in NSO N64 emulation, considering the overview video for that mentioned the games would be in higher resolution.
 
it was used for Super Mario Sunshine's videos.

upscaling N64 games seems pretty useless. they're better off digging in and outputting 1080p or something
 
Just out of curiosity why do you think it's useless? Can't you get better image quality with AI upscaling versus regular screen/TV upscaling?

Changing the rendering resolution for what's likely to be dozens of games (if not hundreds when they start doing other consoles) seems like a lot of unnecessary work especially considering many of those games might not play nice with higher rendering resolutions. Doing a blanket AI upscale on all of them is probably the best way to go.
 
Interesting, so Ampere would presumably do this much better, correct? And I thought this would only be for video streaming and not for scripted sequences in real-time graphics, like the cutscenes in, say… Metroid Dread.

"Better" in the sense of having more options; hardware accelerated playback of video/audio is a bit of a binary 'yes it can/no it can't' as far as I'm aware. (in the absence of hardware acceleration) Software-side/using the CPU on the other hand... well hey, we can imagine how eating up CPU cycles can feel like, right?

Edit: ...reading over it, my wording feels suboptimal. Ok, if you don't have hardware acceleration, you fall back to software-side/using the CPU. Then depending on complexity of the codec, it can eat up a significant number of cycles. If you DO have hardware acceleration, then playing back a video is no sweat for the CPU.

So anyway, according to this:
Yes, the X1 did have the NVDEC. The Yuzu emulator needed to support calls to it to support video playback.

The NVDEC in X1 supports MPEG-2, VC-1, H.264, H.265, VP8, and VP9. Nintendo exposes H.264, H.265, VP8, and VP9.
Checking with wikipedia to be more thorough; Maxwell's NVDEC looks like it supports version 1 of H.265.
Desktop Ampere's NVDEC looks like it gained support for version 2 of H.265 as well as Main profile of AV1.

Don't worry about distinction between V1 and V2 of H.265; it's just the addition of a lot more profiles. And profiles are, from my casual/novice level understanding of the subject, more or less groupings of a codec/standard's bag of tricks. Higher or more advanced profiles include more and more of the codec's tools and tricks, but you're also usually getting diminishing returns. The more commonly supported profiles of a codec should reap most of its benefits.

AV1 decode support is really nice going forward for a couple of reasons.
1. AV1 is part of the generation after H.265. Its ceiling in compression efficiency will accordingly be higher.
2. It's royalty free. I don't know the details, but my understanding is that H.265 adoption never really took off the way H.264 did because H.265 is mired in a significantly worse patent/royalty fees hell. So in practice, it's not unlikely that you'd see a jump from H.264 straight to AV1. That's two generations of codecs there. Which means, massive savings in file size needed for the same perceptual quality.

Of course, the caveat here is that the increase in compression efficiency comes with an increase in encoding complexity. Whenever you look at CPU reviews and glance over the encoding benchmarks, ever notice the difference in performance between H.264 and H.265? That's one generation's worth of increase in complexity. Imagine another on top of that. (And as far as I've heard, AV1 in its first couple of years was hideously slow to encode.)
Which brings me to an aside:
On one end of the spectrum, a random individual like you or me can encode a video on a home computer. Probably using a consumer-grade CPU.
On the other end of the spectrum, the likes of Google and Netflix are running these giant ass server farms.
Game developers I assume are somewhere in the middle; so I'm wondering what are they using to encode videos for their games? Workstation class stuff?
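As a side note, the hardware-vs-software decode point above is easy to summarize as data. The codec lists below are just the ones quoted in this thread (the X1's NVDEC, what Nintendo exposes, and desktop Ampere's additions); whether Dane actually inherits the Ampere block is an assumption, not a confirmed detail:

```python
# Decode-support picture as described in the posts above; Dane inheriting the desktop
# Ampere NVDEC feature set is an assumption.

NVDEC_SUPPORT = {
    "Tegra X1 (Maxwell NVDEC)":     {"MPEG-2", "VC-1", "H.264", "H.265", "VP8", "VP9"},
    "Switch OS (exposed to games)": {"H.264", "H.265", "VP8", "VP9"},
    "Desktop Ampere NVDEC":         {"MPEG-2", "VC-1", "H.264", "H.265", "VP8", "VP9", "AV1"},
}

def decode_path(codec: str, hardware: str) -> str:
    # The binary point from above: either the fixed-function block handles it,
    # or the CPU burns cycles on software decode.
    return "hardware (NVDEC)" if codec in NVDEC_SUPPORT[hardware] else "software (CPU)"

print(decode_path("AV1", "Switch OS (exposed to games)"))   # software (CPU)
print(decode_path("AV1", "Desktop Ampere NVDEC"))           # hardware (NVDEC)
```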
 
I don't see the point in upscaling the back catalogue. just going back and increasing the resolution is enough. unless you mean post-processing upscaling, which I'd rather they avoid
Before the OLED Model, I would've doubted that Nintendo would add a post-processing upscaler to the Switch OS. But seeing the OLED Model's vivid (P3) and standard (sRGB) color modes makes me reconsider. It could be another bonus feature for Dane owners to enable/disable per personal preference.
 
Just out of curiosity why do you think it's useless? Can't you get better image quality with AI upscaling versus regular screen/TV upscaling?

Changing the rendering resolution for what's likely to be dozens of games (if not hundreds when they start doing other consoles) seems like a lot of unnecessary work especially considering many of those games might not play nice with higher rendering resolutions. Doing a blanket AI upscale on all of them is probably the best way to go.
I don't think it's necessary for N64 games, given that the edge complexity of the models is rather low. not to mention the texture resolutions are already low to begin with, so blowing up the image is gonna induce blurriness, and starting from a low source resolution like an N64 game's is gonna amplify that blurriness. especially since we're not using something like DLSS; we're just using a post-processing upscaler, probably a spatial upscaler or something like FSR.

as for the quantity of games, Nintendo already drip-feeds them to us. if they used that drip-feed time to increase the resolution of the games through their emulator, then that's better time spent I think
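If anyone wants to see the blurriness point for themselves, here's a quick, purely illustrative Pillow snippet. The synthetic 320x240 frame is a stand-in for a captured N64 framebuffer and is obviously not what NSO's emulator does:

```python
# Blow up a low-res, flat-shaded frame and compare: nearest keeps hard pixel edges (blocky),
# bicubic smooths them (blurry); neither adds detail. The 320x240 frame is a made-up stand-in.
from PIL import Image, ImageDraw

src = Image.new("RGB", (320, 240), (40, 90, 200))                      # sky
draw = ImageDraw.Draw(src)
draw.rectangle([0, 200, 320, 240], fill=(70, 160, 70))                 # ground
draw.polygon([(60, 200), (160, 40), (260, 200)], fill=(200, 60, 60))   # a low-poly mountain

nearest = src.resize((1920, 1440), Image.NEAREST)   # 6x: sharp but blocky
bicubic = src.resize((1920, 1440), Image.BICUBIC)   # 6x: smooth but blurry
nearest.save("upscaled_nearest.png")
bicubic.save("upscaled_bicubic.png")
```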
 
Capcom looked into RE7 and just weren’t satisfied with what they could deliver in terms of their expectations for the game.

I just don’t believe this, sorry. I believe you that Capcom might have said such things in interviews…but that’s just what most publishers do when they give up on a Nintendo port. They blame the hardware.

I’m glad the Witcher 3 Switch game exists cause it proves there is a reasonable version of RE7 Switch if Capcom cared enough.

In the end, Capcom decided it would take too much dev/time effort to make a Switch version of RE7, so they decided not to and pointed to the hardware as their excuse.

It’s the same as if SE blamed the hardware as to why they couldn’t port a native version of Kingdom Hearts 1&2 HD to the Switch. We know it would be a BS excuse. In the end they just don’t want to bother.

If this was all as straight forward as you believe then we’d be getting a ton of PS4 games on Switch but we don’t because it’s just not realistically feasible to pull it off in a real world environment.

No…and I want to make this clear because this is the whole reason I addressed this in the first place: the ported games the Switch doesn’t get is about the publisher not believing the market is there for their game enough to bother.

That’s it. That’s the primary reason most major multiplats don’t get Nintendo versions.

This is the primary reason cross-gen games on any console eventually go away. It’s not because publishers think the older hardware is holding them back creatively…it’s because they think the older userbase isn’t active enough any more to bother.

There were TONS of cross gen games on the ps4/one from 2013-2016 that could have easily been ported to the Switch at some point. Most weren’t (even though they had a ps360 version).

Wanna guess why?

95% of the 3rd party multiplat releases from 2012-2015 didn’t get ported to the Wii U…despite the Wii U being perfectly capable of replicating ps360 games.

Wanna guess why?

It’s almost always about the publisher being wary about whether a Nintendo port is worth the effort. Too much possible risk for too little possible reward.

The Switch is no different.

(Well, ok, it’s sort of different because the option of playing home console 3rd party multiplats…portably…could be an X factor to increase the sales potential of games that past Nintendo consoles didn’t have going for them. It’s why Skyrim Switch and Witcher 3 Switch even exist at all. They are testing the portability appeal of such games. Trust me, if the Nintendo console released in 2017 wasn’t a console with the ability to play every game portably…you wouldn’t see a Skyrim or Witcher 3 or Doom port.)

When developers can do it faster, cheaper and without a lot of technical cut back - then we will see more games from PS4 on Nintendo hardware and that will start with the DLSS Switch.

Porting a ps4 AAA game to the Switch will take just as much time/effort with the Dane/DLSS model existing as it would without.

If Saber had never put Witcher 3 on the Switch and waited to get the DLSS Dane Switch dev kit to start porting their game to that model exclusively…it would still take a year. This is what I’m telling you.

The new Switch revision isn’t going to speed up the dev process or make Switch porting suddenly a breeze. What it will do is change the graphics/performance results of the output (especially when docked)…it will change the perception of the port by some gamers.

And that’s the important thing to publishers who have been hesitant to port stuff to the Switch previously.
 
That's not how this works. The effort required for Switch ports from PS4/XB1 is absolutely a result of hardware power. A new Switch that's around that power level would be able to receive ports from those systems with far fewer resources required.
 
Not true. Even if the Switch had the same power margins as the PS4/XB1, it would still get "shafted" by publishers who a. do not believe the Nintendo market is part of their primary audience b. are/were moneyhatted by Sony to prevent a Nintendo console port.
 
That's not how this works. The effort required for Switch ports from PS4/XB1 is absolutely a result of hardware power. A new Switch that's around that power level would be able to receive ports from those systems with far fewer resources required.

No…Saber would have still had to make their engine run on unique Switch hardware. Whether it’s Tx1 or Dane. They still would have had to figure out shortcuts and optimize around restrictions of bandwidth and memory and cpu speeds etc.

The only tangible difference is that Saber would have gotten a lot more bang for their buck waiting and porting Witcher 3 to Dane/DLSS switch than TX1 Switch.

But the time and cost wouldn’t be different.

Let’s, for argument’s sake, say CDPR paid Saber $1 million to have 50 devs spend 12 months making the Switch game that released in 2019.

I promise you that it would be basically the same had they decided to finally make a port for the new Switch.

The better hardware doesn’t suddenly make it only cost $500k, take only 25 devs, and only take 6 months of dev time. That’s not how it works.

In the end, the time/effort/cost investment from CDPR is the same. In both cases, they have to weigh whether that investment is worth possibly not even selling a million copies.
 
Not true. Even if the Switch had the same power margins as the PS4/XB1, it would still get "shafted" by publishers who a. do not believe the Nintendo market is part of their primary audience b. are/were moneyhatted by Sony to prevent a Nintendo console port.

You're right that some publishers/developers will never change their mindsets regarding Nintendo and what software to publish on it, but Capcom would absolutely release software natively on a more powerful Switch if the hardware is up to snuff.

And the same goes for a bunch of other publishers, but overall the reason for new hardware is never solely about third parties; it's also a way to let Nintendo developers flex their muscles as well as revitalize Nintendo's product line.
 
Not true. Even if the Switch had the same power margins as the PS4/XB1, it would still get "shafted" by publishers who a. do not believe the Nintendo market is part of their primary audience b. are/were moneyhatted by Sony to prevent a Nintendo console port.

Yet, GameCube had wayyyyyy better "AAA" third party support than the Switch, and that system was nowhere near as popular as the Switch. Hardware parity makes a huge difference. The Wii U even had better "AAA" third party support than the Switch; it was just a very poorly placed concept with terrible marketing.
 
Not true. Even if the Switch had the same power margins as the PS4/XB1, it would still get "shafted" by publishers who a. do not believe the Nintendo market is part of their primary audience b. are/were moneyhatted by Sony to prevent a Nintendo console port.
I'm talking about this purely from a technical standpoint.
No…Saber would have still had to make their engine run on unique Switch hardware. Whether it’s Tx1 or Dane. They still would have had to figure out shortcuts and optimize around restrictions of bandwidth and memory and cpu speeds etc.

The only tangible difference is that Saber would have gotten a lot more bang for their buck waiting and porting Witcher 3 to Dane/DLSS switch than TX1 Switch.

But the time and cost wouldn’t be different.

Let’s, for argument’s sake, say CDPR paid Saber $1 million to have 50 devs spend 12 months making the Switch game that released in 2019.

I promise you that it would be basically the same had they decided to finally make a port for the new Switch.

The better hardware doesn’t suddenly make it only cost $500k, take only 25 devs, and only take 6 months of dev time. That’s not how it works.

In the end, the time/effort/cost investment from CDPR is the same. In both cases, they have to weigh whether that investment is worth possibly not even selling a million copies.
Of course it will take less time and effort if fewer cutbacks have to be made. It's the optimization that's really the hard part with these "impossible ports", and more powerful hardware simply requires less of that.
 
I think what sets impossible ports apart from ones that are more "realistic" is the aspects of the Switch hardware that seem nearly insurmountable, and how the games ran on other systems. for that reason, I don't think RE7 fits either. it's a 1080p/60fps game on XBO, so there's a lot of headroom to be clawed back. and it's a slow-paced single-player game with very few dynamic actors, so the game doesn't seem CPU intensive

With Capcom, the issue seems to lie in the engine. for some reason, they had to make a bespoke RE Engine (I remember hearing them mention such). that's probably what kept RE7 off the most
 
Video streaming, pre-rendered cutscenes (which are still in use in AAA games, to mask loading times) and (depending what audio hardware acceleration is available with the Orin CPU) any audio files that would benefit from lossless compression.

Ampere is capable of decoding AV1, which is what Netflix uses whenever possible to provide 4K video without obscene download requirements, which means smaller file sizes for pre-rendered video of any size (when AV1 isn't available, it uses VP9).
Tegra X1 also had compressed video hardware acceleration, but was limited to AV1's pseudo-predecessor, VP9, which was good, but not as good as AV1 is.
Not 100% sure what games used for compressed audio on Switch or what is available with the Orin CPU, but it seems likely some form of audio compression was previously used to slim down game package sizes.
"Better" in the sense of having more options; hardware accelerated playback of video/audio is a bit of a binary 'yes it can/no it can't' as far as I'm aware. (in the absence of hardware acceleration) Software-side/using the CPU on the other hand... well hey, we can imagine how eating up CPU cycles can feel like, right?

Edit: ...reading over it, my wording feels suboptimal. Ok, if you don't have hardware acceleration, you fall back to software-side/using the CPU. Then depending on complexity of the codec, it can eat up a significant number of cycles. If you DO have hardware acceleration, then playing back a video is no sweat for the CPU.

So anyway, according to this:

The NVDEC in X1 supports MPEG-2, VC-1, H.264, H.265, VP8, and VP9. Nintendo exposes H.264, H.265, VP8, and VP9.
Checking with wikipedia to be more thorough; Maxwell's NVDEC looks like it supports version 1 of H.265.
Desktop Ampere's NVDEC looks like it gained support for version 2 of H.265 as well as Main profile of AV1.

Don't worry about distinction between V1 and V2 of H.265; it's just the addition of a lot more profiles. And profiles are, from my casual/novice level understanding of the subject, more or less groupings of a codec/standard's bag of tricks. Higher or more advanced profiles include more and more of the codec's tools and tricks, but you're also usually getting diminishing returns. The more commonly supported profiles of a codec should reap most of its benefits.

AV1 decode support is really nice going forward for a couple of reasons.
1. AV1 is part of the generation after H.265. Its ceiling in compression efficiency will accordingly be higher.
2. It's royalty free. I don't know the details, but my understanding is that H.265 adoption never really took off the way H.264 did because H.265 is mired in a significantly worse patent/royalty fees hell. So in practice, it's not unlikely that you'd see a jump from H.264 straight to AV1. That's two generations of codecs there. Which means, massive savings in file size needed for the same perceptual quality.

Of course, the caveat here is that the increase in compression efficiency comes with an increase in encoding complexity. Whenever you look at CPU reviews and glance over the encoding benchmarks, ever notice the difference in performance between H.264 and H.265? That's one generation's worth of increase in complexity. Imagine another on top that. (and as far as I've heard, AV1 in its first couple of years was hideously slow to encode)
Which brings me to an aside:
On one end of the spectrum, random individual like you or me can encode a video on our home computer. Probably using a consumer grade CPU.
On the other end of the spectrum, the likes of Google and Netflix are running these giant ass server farms.
Game developers I assume are somewhere in the middle; so I'm wondering what are they using to encode videos for their games? Workstation class stuff?
Interesting! I didn’t even know that the hardware ENC/DEC is one of the things used for pre-rendered cutscenes (a game would need to be coded for this I assume, aka it is not automatic), or that its use is one of the reasons why Switch games are pretty small. I also didn’t know that the Switch used the other codecs either; I thought they were absent from consoles.

Having this present in Dane (it will, if it has to run Switch 1 games) would really help with the game size limit in my opinion, among other things. Maybe storage isn’t as bleak as it seems. Maybe. It is more complicated than this, of course.

Wonder why Nintendo opted to not expose NVENC, probably due to limited hardware resources.

Thank you both for clarifying this! And the features that they have. It helped a lot.
 