• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

According to Nintendo's website, it'll be available "at the My Nintendo Store and select retailers." You can find it on Nintendo's website (in the US at least) here, where it seemingly has indeed replaced the Gray color. Looking around briefly, I also found it on Best Buy's website.
Technically it looks like the gray SKU hasn't been on that page since December or January. A few other bundles were listed before that, then it changed to only show the neon, and now the Mario one has been added.
 
Assuming we have an 8-core A78 setup, do we expect 1 core to be reserved for the OS/networking and background tasks? Would retaining 4 A57s for the OS and BC make sense? What's the best guess based on the Nvidia hack?
If they retain the A57 cores, that wouldn’t be for BC, but for other, more minor things, I think. Phone SoCs these days have their main CPU clusters, but also have dozens of other cores that do other tasks, like managing the 5G modem, for example. The Exynos with RDNA graphics has a small set of A53s in it, but it doesn’t function the same way as, and isn’t meant for the same things as, the main cluster.
 
Unique Terastallization forms for some Pokémon
Is that really a farfetch’d (pun intended) assumption to make when the previous games offered unique forms to some Pokémon, whether cosmetic or not?


I don’t personally find the Unique Terra forms to be that, uh, farfetch’d tbh. It just seems like a safe assumption based on previous Pokémon trends of gimmicks and features.
 
Usable VR costs $550 and Sony may be selling that at a loss.

The Switch 2 will not be built around VR, lol.

I believe Nintendo can sell an HMD in 2025 for $199. They can 'reuse' the joy-cons (that switch owners will already have), so you just need an IR LED addon. The HMD can have one single 4K LCD panel with 3 fixed IPD positions (like Quest 2) and use fresnel lenses (we are moving to pancake lenses that are much better but also more expensive). The rest is not that expensive (4 cameras for tracking; IMU; audio output; USB-C or HDMI). I would only add a battery (so the HMD wouldn't use the console's battery, but it would be optional...).

This can offer a decent VR experience at an accessible price. VR already got to a point where cheap hardware can be good enough, the problem is the library. And that's where Nintendo can make a stride.
 
It will not retain legacy cores; it would not constitute a considerable benefit, and it would mean that squeezing the most out of the chip would incur additional complexity and much higher power consumption. We KNOW it doesn't have "legacy cores" anyway, thanks to said leak.

It's almost certainly an 8 core, single cluster, of A78C. Not multithreaded, so it can't do the "Half a core to OS, half a core to game" thing.

We can't say for certain how much the OS and Applets will use, I would suspect 1 core, since 1 Drake core is so much more powerful than 1 TX1 core. That leaves 7 cores for games, which is comparable to other consoles even if it lacks multithreading.

For BC mode, now, I'm NOT a professional programmer, I can't write assembly, I don't study these things, but what I would ASSUME would be 3 cores used for the "game", like how a virtual machine can be assigned specific cores and memory limits, with the other cores doing misc. tasks to help with the translation/virtualisation. 7 cores of A78C should not have ANY problems virtualising an environment where software compiled for just 3 A57 cores can run.
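To illustrate what I mean (a toy, Linux-flavoured sketch using sched_setaffinity; nothing to do with Horizon's real APIs, and the core numbers are just made up for the example):

```c
/* Toy sketch only: restrict the process hosting the BC "guest"
 * environment to cores 0-2, leaving the remaining cores free for
 * translation/virtualisation helpers on the host side. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t guest_cores;
    CPU_ZERO(&guest_cores);
    for (int core = 0; core < 3; core++)      /* cores 0, 1, 2 for the "game" */
        CPU_SET(core, &guest_cores);

    if (sched_setaffinity(0, sizeof(guest_cores), &guest_cores) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Guest pinned to 3 cores; the host keeps the rest.\n");
    return 0;
}
```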

While it isn't quite as simple as "it's just ARM at the end of the day"; it's also... just ARM at the end of the day, and newer ARM cores are broadly compatible, almost entirely, with older ARM cores.

There is absolutely zero value in retaining a57 cores. The os wouldn't (and shouldn't) be designed to require that.

No, since Armv8.2-A, which the Cortex-A78 supports, is backwards compatible with Armv8-A, which the Cortex-A57 supports. (The Cortex-A78C is likely the CPU used in T239 (Drake), based on one of Nvidia's Linux commits mentioning the T239 having "...eight [CPU] cores in a single cluster"; the Cortex-A78C is the only CPU in the Cortex-A78 family that supports up to 8 CPU cores per cluster, with the Cortex-A78 and the Cortex-A78AE only supporting up to 4. The Cortex-A78C also adds support for Armv8.6-A, which Arm says has been developed with backwards compatibility in mind.)
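That backwards compatibility is also why binaries don't have to be rebuilt per core: code compiled for baseline Armv8-A typically runs unchanged on newer cores, and anything optional gets probed at runtime. A rough illustration of what that probing looks like on AArch64 Linux (generic example, not anything console-specific):

```c
/* Illustration only (AArch64 Linux, not any console SDK): a binary
 * built for baseline Armv8-A runs as-is on newer cores; optional
 * extensions are detected at runtime rather than assumed. */
#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
    unsigned long caps = getauxval(AT_HWCAP);

    printf("LSE atomics (Armv8.1): %s\n",
           (caps & HWCAP_ATOMICS) ? "yes" : "no");
    printf("SIMD dot product (Armv8.2): %s\n",
           (caps & HWCAP_ASIMDDP) ? "yes" : "no");
    return 0;
}
```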

If they retain the A57 cores, that wouldn’t be for BC, but for other, more minor things, I think. Phone SoCs these days have their main CPU clusters, but also have dozens of other cores that do other tasks, like managing the 5G modem, for example. The Exynos with RDNA graphics has a small set of A53s in it, but it doesn’t function the same way as, and isn’t meant for the same things as, the main cluster.
Thanks for the responses. I guess I'll refine my question a bit more.

Assuming it's one A78C core reserved for the OS, and, as noted, no multithreading so no half-core allotments, would that single A78C be enough to handle what's required (e.g. snappy OS, rollback netcode for fighting games, video recording/streaming, etc.)? Let's assume clocks are similar to the OG Switch (1 GHz), up to 50% higher (1.5 GHz) on retail units. Or would having a dedicated cluster of cores just to handle the OS/networking make more sense?
 
Usable VR costs $550 and Sony may be selling that at a loss.

The Switch 2 will not be built around VR, lol.
I would expect a better version of what they tried with Labo, but yeah, VR definitely won't be the main selling point
I think this all comes from the struggle to accept that just doing a Switch 2 is, for once, both the most realistic option and the best option for Nintendo.
Same, though I do wonder if Nintendo would want some way to distinguish the Switch 2 from the Switch 1.
 
No, since Armv8.2-A, which the Cortex-A78 supports, is backwards compatible with Armv8-A, which the Cortex-A57 supports. (The Cortex-A78C is likely the CPU used in T239 (Drake), based on one of Nvidia's Linux commits mentioning the T239 having "...eight [CPU] cores in a single cluster"; the Cortex-A78C is the only CPU in the Cortex-A78 family that supports up to 8 CPU cores per cluster, with the Cortex-A78 and the Cortex-A78AE only supporting up to 4. The Cortex-A78C also adds support for Armv8.6-A, which Arm says has been developed with backwards compatibility in mind.)
True that ARMv8.2-A is BC with ARMv8-A, but hasn't backwards compatibility with ARM been around since ARMv7? There are still some things required, though, like 32-bit support if an app relies on that.
 
I believe Nintendo can sell an HMD in 2025 for $199. They can 'reuse' the joy-cons (that switch owners will already have), so you just need an IR LED addon. The HMD can have one single 4K LCD panel with 3 fixed IPD positions (like Quest 2) and use fresnel lenses (we are moving to pancake lenses that are much better but also more expensive). The rest is not that expensive (4 cameras for tracking; IMU; audio output; USB-C or HDMI). I would only add a battery (so the HMD wouldn't use the console's battery, but it would be optional...).

This can offer a decent VR experience at an accessible price. VR already got to a point where cheap hardware can be good enough, the problem is the library. And that's where Nintendo can make a stride.

I mean, could they literally do that? I guess so, it would just be pretty shitty.

Mobile hardware doing VR without foveated rendering sounds like a really bad idea.

VR is very unpopular already, trying to sell a low resolution VR that runs at less than 90 FPS will probably be even more unpopular.
 
Thanks for the responses. I guess I'll refine my question a bit more.

Assuming it's one A78C core reserved for the OS, and, as noted, no multithreading so no half-core allotments, would that single A78C be enough to handle what's required (e.g. snappy OS, rollback netcode for fighting games, video recording/streaming, etc.)? Let's assume clocks are similar to the OG Switch (1 GHz), up to 50% higher (1.5 GHz) on retail units. Or would having a dedicated cluster of cores just to handle the OS/networking make more sense?
A dedicated cluster for the OS would be wasted silicon and wasted energy for a games console.

One performance core (which is what the A78C is) has more, well, performance, than multiple smaller cores, and does it more efficiently.

This isn't a phone where it can have dozens of apps running in a super low power state at once on a secondary cluster. This is a games console where most of the CPU will be used most of the time anyway, so using a performance core for the OS doesn't reduce system efficiency (and can have benefits for... You know. System performance.)

Nintendo Switch's firmware, Horizon OS, is, in my opinion, a pretty impressive piece of software. I am not aware of a single microkernel OS that has reached this sort of mass market adoption. It is an incredible, built-for-gaming OS that is both fast and efficient. But that also means it's made for the hardware, rather than having hardware made for it, which is a good thing for a gaming OS. One core of A78C is more than enough for everything the OS needs to handle in the realm of I/O, memory and thread management, and the homescreen, with performance to spare to run Applets. When it runs an Applet, it'll run the OS and the Applet on the same core with no trouble at all, the A57 can already do that, and in some Applets, it can do that without stuttering.

When a game is running, and an Applet isn't, that Applet performance can be queued up to do other tasks, like upload and download cloud saves, download games, and so forth. It's already designed so that it can do all of this in a linear fashion on one core.

What benefit would spreading this across multiple cores provide? I can't see a single one.
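If it helps to picture the "one core working through jobs in order" idea, here's a minimal sketch (generic POSIX threads, made-up task names, nothing to do with how Horizon actually schedules anything):

```c
/* Minimal sketch: a single worker thread pinned to one core drains
 * background jobs strictly in order. Task names are made up. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *os_core_worker(void *arg)
{
    const char **tasks = arg;
    for (int i = 0; tasks[i] != NULL; i++)    /* strictly sequential */
        printf("running: %s\n", tasks[i]);
    return NULL;
}

int main(void)
{
    const char *queue[] = { "upload cloud save", "download game update",
                            "refresh news feed", NULL };
    pthread_t worker;
    pthread_create(&worker, NULL, os_core_worker, queue);

    cpu_set_t one_core;
    CPU_ZERO(&one_core);
    CPU_SET(0, &one_core);                    /* the dedicated "OS core" */
    pthread_setaffinity_np(worker, sizeof(one_core), &one_core);

    pthread_join(worker, NULL);
    return 0;
}
```

The point is just that one fast core draining a queue in order is simpler and cheaper than coordinating the same work across a whole extra cluster.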
 
I mean, could they literally do that? I guess so, it would just be pretty shitty.

Mobile hardware doing VR without foveated rendering sounds like a really bad idea.

VR is very unpopular already, trying to sell a low resolution VR that runs at less than 90 FPS will probably be even more unpopular.
The most successful VR project to date has been Google Cardboard.
If you compare the Wii with Kinect or PS Move it was leagues behind, but even so it was the most successful platform, for two reasons, library and price.
VR today suffers chronically from very high prices and very poor libraries, companies believe that targeting enthusiasts by making a $500 kit is a good idea, which is why no one outside of a select niche adopts the technology.
If Nintendo sells a VR bundle that uses the console's own screen plus a pair of lenses, and uses the joycons themselves as controls, obviously technologically it will be far behind the competition, but if it runs Mario Kart in VR in a minimally decent way, this will take over the whole market.
 
Technically it looks like the gray SKU hasn't been on that page since December or January. A few other bundles were listed before that, then it changed to only show the neon, and now the Mario one has been added.
Ah, good to know. I (mistakenly) suggested it replaced the Gray SKU because I noticed that the button to select the bundle was gray, which led me to assume that it had taken the Gray SKU's spot. Interesting that the Gray SKU hasn't been available for a while. It's the color I bought at launch and the one I'd be buying if I were to buy a V2 Switch now, so it makes me a bit sad it was phased out at the My Nintendo Store. I wonder if this was determined based on popularity, and if so, if it was more or less popular than the Neon SKU.
 
I mean, could they literally do that? I guess so, it would just be pretty shitty.

Mobile hardware doing VR without foveated rendering sounds like a really bad idea.

VR is very unpopular already, trying to sell a low resolution VR that runs at less than 90 FPS will probably be even more unpopular.

Well, the Quest 2 sold ~20 million units. It was the best opportunity for many to try good VR for the first time.

Really, I don't see the Quest 2's games as being shit. I believe Nintendo can make amazing VR titles for Drake.
 
I personally think 720p is enough and it would save on cost at launch. They can push a 1080p screen with a Switch 2 Pro or whatever. If it launches with a 720p OLED, I think the chances of a serious VR attempt will be diminished, though they may try a Labo 2 with the Switch 2.
 
The most successful VR project to date has been Google Cardboard.
If you compare the Wii with Kinect or PS Move it was leagues behind, but even so it was the most successful platform, for two reasons, library and price.
VR today suffers chronically from very high prices and very poor libraries, companies believe that targeting enthusiasts by making a $500 kit is a good idea, which is why no one outside of a select niche adopts the technology.
If Nintendo sells a VR bundle that uses the console's own screen plus a pair of lenses, and uses the joycons themselves as controls, obviously technologically it will be far behind the competition, but if it runs Mario Kart in VR in a minimally decent way, this will take over the whole market.

VR gaming at under 90 FPS looks really bad and the only way the Switch 2 is hitting 90 FPS in VR without foveated rendering is if the resolution is just incredibly bad, lol.

I do not think BotW VR or Mario Odyssey VR was very successful despite being very cheap for the Switch 1.

If Nintendo wants to do something with VR on the Switch 2, the best thing would be to make third-party headsets compatible with it.
 
Thanks for the responses. I guess I'll refine my question a bit more.

Assuming it's one A78C core reserved for the OS, and, as noted, no multithreading so no half-core allotments, would that single A78C be enough to handle what's required (e.g. snappy OS, rollback netcode for fighting games, video recording/streaming, etc.)? Let's assume clocks are similar to the OG Switch (1 GHz), up to 50% higher (1.5 GHz) on retail units. Or would having a dedicated cluster of cores just to handle the OS/networking make more sense?
rollback wouldn't be handled on the OS core. that's for the game to manage
 
There's still some things required, like 32-bit support if an app runs on that.
32-bit support shouldn't be a problem with the DLSS model* since the Cortex-A78C does have 32-bit support. But assuming Nintendo wants backwards compatibility with the Nintendo Switch for the DLSS model*'s successor, 32-bit support could be a problem for the DLSS model*'s successor. But of course, Nintendo and Nvidia should have plenty of time to figure out a way to deal with that potential problem.

Assuming it's one A78C core reserved for the OS, and, as noted, no multithreading so no half-core allotments, would that single A78C be enough to handle what's required (e.g. snappy OS, rollback netcode for fighting games, video recording/streaming, etc.)? Let's assume clocks are similar to the OG Switch (1 GHz), up to 50% higher (1.5 GHz) on retail units. Or would having a dedicated cluster of cores just to handle the OS/networking make more sense?
A dedicated cluster for the OS would be wasted silicon and wasted energy for a games console.

One performance core (which is what the A78C is) has more, well, performance, than multiple smaller cores, and does it more efficiently.

This isn't a phone where it can have dozens of apps running in a super low power state at once on a secondary cluster. This is a games console where most of the CPU will be used most of the time anyway, so using a performance core for the OS doesn't reduce system efficiency (and can have benefits for... You know. System performance.)

Nintendo Switch's firmware, Horizon OS, is, in my opinion, a pretty impressive piece of software. I am not aware of a single microkernel OS that has reached this sort of mass market adoption. It is an incredible, built-for-gaming OS that is both fast and efficient. But that also means it's made for the hardware, rather than having hardware made for it, which is a good thing for a gaming OS. One core of A78C is more than enough for everything the OS needs to handle in the realm of I/O, memory and thread management, and the homescreen, with performance to spare to run Applets. When it runs an Applet, it'll run the OS and the Applet on the same core with no trouble at all, the A57 can already do that, and in some Applets, it can do that without stuttering.

When a game is running, and an Applet isn't, that Applet performance can be queued up to do other tasks, like upload and download cloud saves, download games, and so forth. It's already designed so that it can do all of this in a linear fashion on one core.

What benefit would spreading this across multiple cores provide? I can't see a single one.
And I think there could also be increased latency with an additional cluster of CPU cores dedicated to the OS, since that's one more piece of hardware that needs to be communicated with.

So Chips and Cheese did an interesting and informative analysis of Van Gogh, the Steam Deck's APU. And there are definitely some interesting tidbits from Chips and Cheese's article that could also be applicable to the DLSS model*.

Caching

Like Renoir, Van Gogh's CCX only has 4 MB of L3 cache. Desktop and server Zen 2 variants feature 16 MB of L3 cache per CCX, which helps insulate the cores from slow memory and generally improves performance.

Valve is doing something funny in their OS, because the L3 is basically missing from a latency test with default settings. Setting the scaling governor to performance rather than the default schedutil makes something resembling a L3 show up, but performance is still very poor. L3 performance is reasonable under Windows, indicating it's not a defect in the APU.
[Image: steamdeck_latency_variation.png]

Van Gogh's L1 and L2 caches perform just as you would expect from any Zen 2 CPU. Like with Renoir, we see 4 cycles of L1D latency, and 12 cycles of L2 latency. Both mobile chips see a big L3 capacity deficit compared to a high end desktop Zen 2 implementation.
[Image: steamdeck_latency.png]

A small L3 hurts Renoir, but Van Gogh sees a special level of pain because LPDDR5 latency is abysmal. Even servers these days don't take 150 ns to get data from memory. Unlike the L3, results remained consistently poor even in Windows 11. We're probably seeing a serious issue with the memory controller rather than OS power saving weirdness.
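(Side note from me, not the article: latency numbers like these come from pointer-chasing tests, where each load depends on the previous one. A rough toy version, not their actual harness, and with an arbitrary 256 MB footprint, looks something like this:)

```c
/* Rough pointer-chasing sketch (not Chips and Cheese's harness):
 * chase a random cycle through a buffer far larger than any cache,
 * so the average time per step approximates DRAM load latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (256u * 1024u * 1024u / sizeof(size_t))   /* 256 MB buffer */
#define STEPS (32u * 1024u * 1024u)

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof(size_t));
    if (!chain) return 1;

    /* Build a single random cycle (Sattolo's algorithm). */
    for (size_t i = 0; i < ELEMS; i++) chain[i] = i;
    for (size_t i = ELEMS - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (size_t s = 0; s < STEPS; s++) idx = chain[idx];  /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per load (final idx=%zu)\n", ns / STEPS, idx);
    return 0;
}
```

On hardware like Van Gogh you'd expect something in the ~150 ns ballpark at that footprint, per the article's figures.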
[Image: steamdeck_mem_1.jpg]

The four LPDDR5 chips are laid out next to the APU, allowing for short trace lengths compared to a DDR4 (SO)DIMM setup

The Ryzen 7 4800H laptop tested was equipped with DDR4-3200 22-22-22-52. Those JEDEC timings aren't as tight as what you might find on a typical desktop DDR4 kit, but the 4800H still shows off a much better latency result than Van Gogh. I guess something had to give for the L part of LPDDR5.

However, LPDDR5 again turns in a disappointing performance. Certainly, a variety of factors mean that getting full theoretical bandwidth out of any DRAM configuration is a pipe dream. For example, you'll lose memory controller cycles from read-to-write turnarounds and page misses. But 25 GB/s is on the wrong planet.

I expected better performance out of a 128-bit LPDDR5-5500 setup. The chips themselves are rated for 6400 MT/s, meaning that theoretical memory bandwidth is totally wasted from the CPU side.
[Image: steamdeck_bios.jpg]

Nice 5500 MT/s there (in the BIOS). Shame we can't get that bandwidth from the CPU

To put more perspective into just how bad this is, Renoir's DDR4-3200 setup beats Van Gogh's by a massive margin. That applies even when I used process affinity to limit my test to a single CCX. 25 GB/s is something out of the early DDR4 days. For example, a Core i5-6600K can pull 27 GB/s from a dual channel DDR4-2133 setup.
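(Again, an aside from me rather than the article: bandwidth figures like these are usually measured by having every thread stream through a big buffer and dividing bytes touched by wall time. A crude OpenMP sketch, nothing like their actual tool, with arbitrary buffer size:)

```c
/* Crude read-bandwidth sketch (OpenMP, compile with -fopenmp):
 * all threads sweep a large shared buffer; bytes / seconds = GB/s.
 * Buffer size and the single pass are arbitrary choices. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (512u * 1024u * 1024u / sizeof(double))        /* 512 MB of doubles */

int main(void)
{
    double *buf = malloc(N * sizeof(double));
    if (!buf) return 1;
    for (size_t i = 0; i < N; i++) buf[i] = 1.0;         /* touch pages first */

    double sum = 0.0, t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+ : sum)
    for (size_t i = 0; i < N; i++) sum += buf[i];
    double secs = omp_get_wtime() - t0;

    printf("~%.1f GB/s (checksum %.0f)\n", N * sizeof(double) / secs / 1e9, sum);
    return 0;
}
```

A toy like this won't match a tuned tool, but it's enough to see whether you're anywhere near the theoretical figure.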
[Image: steamdeck_mt_bw_compared.png]

The LPDDR5 setup therefore saddles the CPU with garbage memory latency, while providing bandwidth on par with a DDR4 setup out of late 2015. It's not a huge step up from a good DDR3 setup either. All that is made worse by the CPU's small L3 cache, which means the cores are less insulated from memory than they would be on a desktop or server Zen 2 implementation.

For even more perspective, we can look at memory bandwidth usage in Cyberpunk 2077. The game was run with raytracing off, allowing framerates to hover around 100 FPS. I’m using undocumented performance counters, but I’ve tested by pulling a known amount of data from memory and checking to make sure counts are reasonable.
[Image: 3950x_cp2077_bw.png]

Even desktop Zen 2 with 16 MB of L3 would find itself needing more than 25 GB/s. Less L3 capacity means even higher memory bandwidth demand. Van Gogh is clearly not optimized to get the most out of its CPU cores. It's starting to feel like a smaller console APU, where engineering effort is focused on the GPU side, rather than the CPU.

DRAM latency is again terrible. RDNA 2's memory latency typically compares well to GCN-based architectures. But LPDDR5 latency is a nonstop shitshow, and the GPU side is not immune. Thankfully, bandwidth is much better. With a GPU bandwidth test, the LPDDR5 controller finally redeems itself and achieves something close to what it should be capable of on paper. With over 70 GB/s of bandwidth, the Custom GPU 0405 gets a massive bandwidth lead over Renoir's iGPU. That lines up with Van Gogh being a gaming focused product.
[Image: steamdeck_gpu_bw.png]

GPU                    | Configuration                            | Memory Bandwidth               | Bandwidth Per FLOP (Bytes Per FLOP)
AMD Custom GPU 0405    | 512 FP32 lanes at 1.6 GHz, 1.64 TFLOPs   | 128-bit LPDDR5-5500, 88 GB/s   | 0.054
AMD Radeon 7 (Renoir)  | 448 FP32 lanes at 1.6 GHz, 1.43 TFLOPs   | 128-bit DDR4-3200, 51.2 GB/s   | 0.036
AMD Radeon RX 6900 XT  | 5120 FP32 lanes at 2.5 GHz, 25.6 TFLOPs  | 256-bit GDDR6-16000, 512 GB/s  | 0.02
Xbox Series X GPU      | 3328 FP32 lanes at 1.8 GHz, 12.1 TFLOPs  | 320-bit GDDR6-14000, 560 GB/s  | 0.046
To counter this, the Steam Deck's LPDDR5 setup provides compute-to-bandwidth ratios comparable to that of consoles. Massive memory bandwidth means there's no need for an Infinity Cache. It further means the GPU should have plenty of bandwidth available even when it's sharing a memory bus with the CPU. Finally, Van Gogh's memory bandwidth strategy provides an interesting contrast to desktop GPUs, which are using larger caches instead of massive VRAM bandwidth. It looks like DRAM technology is still good enough to feed the bandwidth demands of small GPUs without incurring massive power costs.
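(To sanity-check that last column: bytes per FLOP is just memory bandwidth divided by compute throughput, using the figures quoted above.)

```c
/* Reproduce the table's bytes-per-FLOP column: GB/s divided by GFLOPs,
 * using the figures quoted in the table above. */
#include <stdio.h>

int main(void)
{
    struct { const char *gpu; double gflops, gbps; } parts[] = {
        { "AMD Custom GPU 0405",   1640.0,   88.0 },
        { "AMD Radeon 7 (Renoir)", 1430.0,   51.2 },
        { "AMD Radeon RX 6900 XT", 25600.0, 512.0 },
        { "Xbox Series X GPU",     12100.0, 560.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-24s %.3f bytes/FLOP\n", parts[i].gpu,
               parts[i].gbps / parts[i].gflops);
    return 0;
}
```

This reproduces the 0.054 / 0.036 / 0.02 / 0.046 figures in the table.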

[Image: steamdeck_mem.jpg]

Samsung's LPDDR5 chips mounted on the Steam Deck board

If we lock desktop RDNA 2 to the same clocks, we see Van Gogh behaving very similarly right up to L2. There, Van Gogh's smaller L2 is actually faster at matched clocks. Checking a smaller L2 is probably easier. A small GPU like the one on Van Gogh also has fewer L2 clients, further simplifying things and allowing for latency optimizations.
[Image: steamdeck_gpu_latency_vs_n21.png]

Past the L2, we can see desktop RDNA 2's Infinity Cache. It's prominently missing on Van Gogh. But again, Van Gogh relies on massive memory bandwidth rather than caches, so it doesn't need an extra level of cache.

CPU to GPU Link Bandwidth

High LPDDR5 bandwidth has other uses too. DRAM acts as a backing store for transfers between the CPU and GPU. On an integrated GPU, more DRAM bandwidth means faster copies between CPU and GPU memory.
[Image: steamdeck_link.png]

Van Gogh sees excellent transfer rates between the CPU and GPU. It's a lot faster than Renoir, which is limited by DDR4 bandwidth. And, it's faster than the RX 6900 XT, which is limited by PCIe 4.0. However, this impressive performance won't be too important in a gaming platform. PCIe bandwidth doesn't have a significant effect on gaming performance until you get to extremely slow configurations. It's nice to have for compute applications that offload some work to the GPU, and then do some CPU-side processing on the results before the next iteration. But that's not what Van Gogh is made for.
I hope Nintendo and Nvidia allocate the max amount of L3 cache, 8 MB, to the Cortex-A78C cores on Drake. And I do wonder if Nintendo's and/or Nvidia's decision to allocate 1 MB of L2 cache to Drake's GPU could prove to be a major bottleneck.
* → a tentative name that I use
 
That clock ramp time is... something.
Alright, so Chips and Cheese took a look at the time needed for a bunch of different CPUs to go from idle to high/max here. Typically it's in the one-to-two-digit millisecond range, so Van Gogh stretching its ramp-up time into the high three-digit milliseconds certainly stands out.

The latencies are surprising; wonder what's going on there. Wonder if they can get their hands on a hacked Switch for some testing of the LPDDR4? :unsure:
Or a 4700S to see how a CPU works with GDDR.

We're not the only people on the internet looking at bandwidth:compute ratio, yea!
 
32-bit support shouldn't be a problem with the DLSS model* since the Cortex-A78C does have 32-bit support. But assuming Nintendo wants backwards compatibility with the Nintendo Switch for the DLSS model*'s successor, 32-bit support could be a problem for the DLSS model*'s successor. But of course, Nintendo and Nvidia should have plenty of time to figure out a way to deal with that potential problem.
I was speaking of it in general. The Drake/A78C will have it, but after that, I have my doubts. If the successor's successor were meant to also be backwards compatible all the way back to the Switch, they'd either have to have made patches/updates to the affected games to bring them to 64-bit, or basically emulate/translate the code.
 
I was speaking of it in general. The Drake/A78C will have it, but after that, I have my doubts. If the successor's successor were meant to also be backwards compatible all the way back to the Switch, they'd either have to have made patches/updates to the affected games to bring them to 64-bit, or basically emulate/translate the code.
Given where emulation is now, that will be the easiest solution
 
Is that really a farfetch’d (pun intended) assumption to make when the previous games offered unique forms to some Pokémon, whether cosmetic or not?


I don’t personally find the Unique Terra forms to be that, uh, farfetch’d tbh. It just seems like a safe assumption based on previous Pokémon trends of gimmicks and features.
A later update clarified that the new forms were part of a new mechanic.

Not saying it isn’t just a guess, but just reiterating that there was more there than was in the Presents, other than the graphics update.
 
The latencies are surprising; wonder what's going on there. Wonder if they can get their hands on a hacked Switch for some testing of the LPDDR4? :unsure:
Chips and Cheese's article on Cannon Lake, which uses LPDDR4, shows the LPDDR4-2400 controller for Cannon Lake having a 32.71 ns latency increase over the DDR4-2400 controller for the Intel i7-7700K.
[Image: cnl_latency_ns.png]

That's definitely a far cry from the LPDDR5-5500 controller for Van Gogh having a 105.19 ns latency increase over the DDR4-3200 controller for the Ryzen 7 4800H!
 
Is that really a farfetch’d (pun intended) assumption to make when the previous games offered unique forms to some Pokémon, whether cosmetic or not?


I don’t personally find the Unique Terra forms to be that, uh, farfetch’d tbh. It just seems like a safe assumption based on previous Pokémon trends of gimmicks and features.
It's not the most out there guess ever, but it's also not the most safe assumption either. In previous generations, the Pokémon specific variants of the battle gimmick were already established in the base pair. Later entries mostly just built on what was already there rather than introducing entirely new mechanics, save for box legends getting special mechanics adjacent to the gimmicks (and even that's not guaranteed, but this isn't the place for discussing how Sword and Shield is structurally a weird transitional generation). Actually introducing the species-specific stuff where it wasn't present before is kind of a new thing, even if the end state resembles past generations.
 
Nanoseconds though. A nanosecond is a millionth of a millisecond. So this doesn't look like a lot.
That still represents a nearly 100% increase in latency, and, forgive me if I’m wrong, but the huge number of memory operations performed means these nanoseconds add up. It becomes a choke point.

We’re dealing with millions of operations per second, and they need to be fed. At that scale, having to wait 155ns vs 70ns to load up your cache makes a difference. And Van Gogh apparently not utilizing its L3 cache at all makes this worse.
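Back-of-envelope, with a completely made-up miss count and ignoring that real CPUs overlap many misses at once:

```c
/* Toy arithmetic only: a hypothetical 100,000 last-level-cache misses
 * per ~10 ms frame (100 FPS), fully serialised, at 70 ns vs 155 ns. */
#include <stdio.h>

int main(void)
{
    const double misses_per_frame = 100000.0;   /* made-up number */
    const double frame_ms = 10.0;               /* ~100 FPS */
    const double latency_ns[] = { 70.0, 155.0 };

    for (int i = 0; i < 2; i++) {
        double stall_ms = misses_per_frame * latency_ns[i] / 1e6;
        printf("%5.0f ns/miss -> %5.2f ms of stalls (%3.0f%% of the frame)\n",
               latency_ns[i], stall_ms, 100.0 * stall_ms / frame_ms);
    }
    return 0;
}
```

Real workloads overlap misses so it's nowhere near that bad in practice, but the gap still compounds fast.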
 
Was this posted already? Apparently it's from the BackerKit survey for Armed Fantasia.

[Image: 2023-03-03-15-25-21-Survey-Armed-Fantasia-Penny-Blood-Backer-Kit-Mozilla-Firefox.png]
It's just the same thing Eiyuden Chronicle did years ago. From their Kickstarter page:
Should no new hardware be released/announced during this time and/or we discover that we are unable to port both games to the Nintendo hardware available at the time, then we will reach out to backers who requested Nintendo as their platform of choice to discuss alternatives.

Kinda funny how the speculative Switch successor is higher on that list than Xbox Series X.

;D
It's a JRPG; the Drake version has a good chance of being the best-selling one if it's made, and the XSX version is 99.9% certain to be the worst-selling. The strange thing is Steam coming first in a list about physical media.
 
Kinda funny how the speculative Switch successor is higher on that list than Xbox Series X.

;D
I love my Xbox Series X, but really, the next Switch is likely to be far more successful, especially in Asia.

Unless they fuck up the battery life, this thing will mop the floor in much of Asia. A portable console that can run... Almost anything? And plug into the 4K TV?

It's like, a definitional emerging-middle-class device, just like the Switch was in 2017, while the established middle classes in Japan, Singapore, and Hong Kong already adore the concept.
 
In recent weeks there has been a proliferation of "Switch 2"-themed blogs, tweets, podcasts, videos... If only one of them gave us anything new compared to what we've already known for so many months... 🙄
 
I think MVG's arguments are very much flawed. Nor do I think backwards compatibility is as difficult as MVG implied.

I was just about to post that, but I wanted to watch more of the video. He seems very dismissive of the idea that Nintendo planned ahead, and instead assumes they'll have to play catch-up and that not all games will get that push forward.

I don't see a case where Nintendo wouldn't have mandated that Switch games be compatible with later, upgraded systems down the road.
 
Wouldn't they only need an emulation layer for the GPU side if they needed to use an emulation solution for how the firmware/shaders are handled, rather than a full emulation of the entire system? And that's assuming that's the route Nintendo and Nvidia go; they may have a better solution than what we're aware of.
 
The fact that Nintendo is working so closely with Nvidia on this platform makes me not terribly concerned about BC. They'll figure it out.
 
If higher resolution is one of their selling points and 4K isn't possible for every old game, people are gonna ask why. I think they have to give an explanation for that. Maybe they will not have a special name for it, just saying something like 4K not supported, but I guess we'll see. Though I do think they will talk about terms like 4K and that, I mean they called their most recent Switch OLED Model.
Nope. No explanation is needed. The words “UP TO” are there. They do a lot of heavy-lifting. Also, it isn’t the priority of every developer to have a 4K resolution, and I’ve never seen anybody ask for explanations on PS/XBox, where the gaming community has a tendency to overshoot their capacities and lowball those of Nintendo. We’ll be fine with 720p, 900p, 1080p, and 1440p. 4K is nice, it could be literally one game in the entire library, and “UP TO” would still be correct… but it's not a dealbreaker, and most people don’t care that much for it.
 
I think Nintendo want the transition from Switch to Drake to be as smooth as possible, and no BC would be very costly to folks. Especially after dragging out this generation as long as possible, and considering just how high software sales have been with the console, it'd be asinine to leave all that on the table.
MK8DX is showing no sign of stopping sales, and Nintendo would just let that dry up?

I wonder if he is trolling at this point.

sowing doubt is the easiest form of clickbait

Why do we have to assume being skeptical means trolling or clickbait? Come on, let's not be juvenile.
 
Why is there so much uncertainty over BC with Switch? Didn't the PS5 and Xbox Series face similar hurdles when they jumped to a new GPU generation?

I think it's just general lack of faith in Nintendo to prioritize it. The same as people being uncertain about NSO carrying over.

We are at the uncertain mercy of Nintendo here, but I am certain about one thing, the bad reaction that would occur if BC wasn't there.
 
I think it's just general lack of faith in Nintendo to prioritize it. The same as people being uncertain about NSO carrying over.

We are at the uncertain mercy of Nintendo here, but I am certain about one thing, the bad reaction that would occur if BC wasn't there.
We can assume Nintendo is aware of this, which is why MVG's skepticism feels unwarranted.
 
I think it's just general lack of faith in Nintendo to prioritize it. The same as people being uncertain about NSO carrying over.

We are at the uncertain mercy of Nintendo here, but I am certain about one thing, the bad reaction that would occur if BC wasn't there.
MVG is a Switch dev and is well known in the hacking/dev community. It's very curious he chose this hill.
 
I think Nintendo want the transition from Switch to Drake to be as smooth as possible, and no BC would be very costly to folks. Especially after dragging out this generation as long as possible, and considering just how high software sales have been with the console, it'd be asinine to leave all that on the table.
MK8DX is showing no sign of stopping sales, and Nintendo would just let that dry up?





Why do we have to assume being skeptical means trolling or clickbait? Come on, let's not be juvenile.
Because it's obvious to most people that:

1. Nvidia is capable of implementing a BC solution for their own hardware.

2. Nintendo wants BC on the next system.
 
I think it's just general lack of faith in Nintendo to prioritize it. The same as people being uncertain about NSO carrying over.

We are at the uncertain mercy of Nintendo here, but I am certain about one thing, the bad reaction that would occur if BC wasn't there.
There's no uncertainty with NSO, they've explicitly said their account systems going forward will carry over to their subsequent platform. They've said this like, dozens of times.

They've hinted at the same thing regarding BC but AFAIK haven't said so explicitly.
 
Why is there so much uncertainty over BC with Switch? Didn't the PS5 and Xbox Series face similar hurdles when they jumped to a new GPU generation?
MVG acknowledges Nintendo/Nvidia are capable of delivering BC if they want, and even lists many ways to do it.

But they're just pessimistic about the successor being strong enough to emulate the Switch the way the Deck does, and about Nintendo being willing to invest in overcoming the hurdles to support the full library, because studies say BC isn't that important; they may just patch the NSO apps plus part of the library and call it a day.
 
Given that Nvidia will be helping Nintendo with BC, if not doing the major part of the work themselves, I don't think we should be worried about it.

Imo, this is a case of "only a stingy budget could prevent it", and while Nintendo is never shy about confusing decisions, this isn't going to be one of them.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

