StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)


[Image: power-efficiency curve comparing the Cortex-A78, Cortex-A710, and Cortex-A715]

So although the Cortex-A715 is noticeably more power efficient than the Cortex-A710, it's still slightly less power efficient than the Cortex-A78 (though the Cortex-A715 does post a noticeably higher single-core score than the Cortex-A78 at ~1.7 W).
 
Does anyone remember an article or tweet about two people having a meeting, where one of them mentioned a new model/hardware or technology and said Xenoblade (2?) would benefit from it (that it would look good, or something like that)? I think it was from 2021, but I can't find that conversation.

I think they were Japanese (not Chinese), and they had the meeting in a restaurant.
No one?

I'm sad.
 
Does anyone remember an article or tweet about two people having a meeting, where one of them mentioned a new model/hardware or technology and said Xenoblade (2?) would benefit from it (that it would look good, or something like that)? I think it was from 2021, but I can't find that conversation.

I think they were Japanese (not Chinese), and they had the meeting in a restaurant.
I remember, it was a Tweet, but I thought we came to the conclusion that the person knew just as much as we did about the new model.
 
Does anyone remember an article or tweet about two people having a meeting, where one of them mentioned a new model/hardware or technology and said Xenoblade (2?) would benefit from it (that it would look good, or something like that)? I think it was from 2021, but I can't find that conversation.

I think they were Japanese (not Chinese), and they had the meeting in a restaurant.
Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
 
* Hidden text: cannot be quoted. *
The penultimate quote you highlighted is interesting. It tracks with what Nate and others said, but this individual makes it seem like there's been no talk of it for years, which immediately makes me think of the Mariko overclock experiments we know Nintendo did.

The user could just be parroting stuff from this forum, though.

Famiboards>Twitter>Famiboards
 
@oldpuck Let's say the delay allowed significant changes to the memory controller, e.g. a switch from mobile RAM to desktop RAM. Could those changes improve Drake's single-threaded performance while keeping a similar hardware profile?
 
No. Desktop RAM consumes a lot more power, and you'd lose performance trying to stay in the same power window.
Then limit the amount of RAM used in handheld mode, or downclock it, if that's possible.
 
Then limit the amount of RAM used in handheld mode, or downclock it, if that's possible.
That's what @ILikeFeet is referring to. In order to get battery life where you want it to be, you'd wind up underclocking all your performance wins away. There is a reason that LPDDR exists - it keeps performance up at lower power usage.

The subtleties of RAM performance aren't really my area of expertise (I usually turn to @Look over there for that), but I don't think there is a big win there anyway.

So although the Cortex-A715 is noticeably more power efficient than the Cortex-A710, it's still slightly less power efficient than the Cortex-A78 (though the Cortex-A715 does post a noticeably higher single-core score than the Cortex-A78 at ~1.7 W).
There aren't a lot of A78C benchmarks that I can find, and the ones I can find are all from the Snapdragon 8cx, which has (I think) a 6 big/2 little cluster, which makes the numbers hard to interpret. But my best guess is as much as a 50% single-core perf improvement over the A78. The wide variety of cache configurations available on the more recent big cores makes me wonder how much of the variation is hidden in what the vendor chooses for cache size.
 
@Dakhil found this

Original post: https://tw*tter.com/tweet/status/1618182325367615488


Mobile GPUs are moving fast. A 1050 was pretty much my expectation, and mobile devices are already getting there.
 
Agreed, but it's hard to imagine them not sitting on appealing software if they're deciding to push a potential Switch 2 to late 2024. Given that Nintendo had a great launch year for new hardware with the Switch in 2017, I have a hard time believing they'll drop the ball on appealing software for potential buyers during the launch period, especially given they'll want existing Switch users to transition to the new hardware.
Beyond the seemingly obvious need to learn the software lessons of the Wii U launch, it seems to me that a year with a new 3D Zelda, hypothetically the first new 2D Mario in 10 years, the first Metroid Prime in ages while we wait for 4 (even if it's a remaster), and possibly a Pokémon spin-off can only be called "very weak" inside the bubble of an enthusiast forum. It's not the line-up of the century, but for an end-of-life year it would look pretty good in the eyes of the general public, in my opinion.
 
Don't worry guys, Nintendo will launch the Switch 2 around the time the PS6 launches. It won't be too far off, just 5-6 more years!

I am only half joking to cope.
 

It seems the Avalanche developers have been doing interviews and have talked about how they managed to make a native Switch port. As it's a more or less technical topic, I think it's of interest.

“The challenges are several. One of them is the graphics: we had to reduce the complexity of the polygons and animations. We had to figure out how to make the effects slower or faster, depending on what we need. Obviously, between the programming and the change of art, they are two different recipes for building the Switch version.

The same recipes also apply to the PS4 and Xbox One; the good thing is that if you get the game to run on those consoles, it becomes easier to go to the Switch. There are people who don't do that. Going from PS5 directly to Switch is a very big change. We can take everything we've done on the PS4 and Xbox One and move it to the Switch.”
 

It seems the Avalanche developers have been doing interviews and have talked about how they managed to make a native Switch port. As it's a more or less technical topic, I think it's of interest.

“The challenges are several. One of them is the graphics: we had to reduce the complexity of the polygons and animations. We had to figure out how to make the effects slower or faster, depending on what we need. Obviously, between the programming and the change of art, they are two different recipes for building the Switch version.

The same recipes also apply to the PS4 and Xbox One; the good thing is that if you get the game to run on those consoles, it becomes easier to go to the Switch. There are people who don't do that. Going from PS5 directly to Switch is a very big change. We can take everything we've done on the PS4 and Xbox One and move it to the Switch.”
Porting the port xD
 

It seems the Avalanche developers have been doing interviews and have talked about how they managed to make a native Switch port. As it's a more or less technical topic, I think it's of interest.

“The challenges are several. One of them is the graphics: we had to reduce the complexity of the polygons and animations. We had to figure out how to make the effects slower or faster, depending on what we need. Obviously, between the programming and the change of art, they are two different recipes for building the Switch version.

The same recipes also apply to the PS4 and Xbox One; the good thing is that if you get the game to run on those consoles, it becomes easier to go to the Switch. There are people who don't do that. Going from PS5 directly to Switch is a very big change. We can take everything we've done on the PS4 and Xbox One and move it to the Switch.”

So the Switch version won't be far off the PS4 version? That's quite good; it should mean it'll look great in handheld! If the game has a long enough tail, they may even release a Switch 2 upgrade.
 
So the Switch version won't be far off the PS4 version? That's quite good; it should mean it'll look great in handheld! If the game has a long enough tail, they may even release a Switch 2 upgrade.
I think it's going to get some kind of expansion, so it will probably have post-launch support, including possible improvements for Drake if the timing lines up.
 
That's what @ILikeFeet is referring to. In order to get battery life where you want it to be, you'd wind up underclocking all your performance wins away. There is a reason that LPDDR exists - it keeps performance up at lower power usage.

The subtleties of RAM performance aren't really my area of expertise (I usually turn to @Look over there for that), but I don't think there is a big win there anyway.
Having twice the RAM bus width is actually quite impressive as it is (coming from the Switch). But since the SD has it, it's expected. I wonder if a 256-bit bus width is possible to fit in a handheld device, and I wonder what the power draw would be 🤔
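To put rough numbers on the bus width question: peak theoretical bandwidth is just transfer rate times bus width. A quick napkin-math sketch in Python (the 128-bit and 256-bit LPDDR5 configurations here are hypothetical illustrations, not confirmed specs for any device):

```python
# Peak theoretical bandwidth: transfer rate (MT/s) * bus width (bits) / 8 bits per byte
def peak_bw_gbs(mts: int, bus_bits: int) -> float:
    return mts * bus_bits / 8 / 1000  # -> GB/s

print(peak_bw_gbs(3200, 64))    # Switch, LPDDR4-3200 on a 64-bit bus -> 25.6 GB/s
print(peak_bw_gbs(6400, 128))   # hypothetical LPDDR5-6400, 128-bit   -> 102.4 GB/s
print(peak_bw_gbs(6400, 256))   # hypothetical 256-bit configuration  -> 204.8 GB/s
```

Doubling the bus doubles peak bandwidth, but it also roughly doubles the number of memory PHY lanes to route and power, which is presumably why 256-bit setups don't show up in handhelds.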

@Dakhil found this

Original post: https://tw*tter.com/tweet/status/1618182325367615488


Mobile GPUs are moving fast. A 1050 was pretty much my expectation, and mobile devices are already getting there.
Not gonna lie, 1.8 TFLOPS is disappointing. I thought we'd already been there for a few years. Unless they're talking about 8 watts for the whole SoC and not the GPU alone; in that case I'd say that's pretty damn power efficient, especially compared to the Pascal-based 1050 from 2016 (75 W TDP).

I wonder how they compare to Apple though 🤔
So the Switch version won't be far off the PS4 version? That's quite good; it should mean it'll look great in handheld! If the game has a long enough tail, they may even release a Switch 2 upgrade.
So in other words... it's a cross-gen game built for Xbox One/PS4.

I think it's going to get some kind of expansion, so it will probably have post-launch support, including possible improvements for Drake if the timing lines up.
I hope the Drake version is a port of the PS4 (or Xbox Series S) version and not the Switch one. I don't want a DQ11 scenario, where we get a port with the reduced polygons and other compromises of weaker hardware, just at a higher resolution.
 
Was this posted here?


But whatever LPDDR5T is (or isn't), SK hynix tells us that they intend to make a proper JEDEC standard of it. The company is already working with JEDEC on standardization of the memory technology, and while this doesn't guarantee that other memory vendors will pick up the spec, it's a sign that LPDDR5T isn't going to be some niche memory technology that only ends up in a few products. This also means that the rest of the pertinent technical details should be published in the none too distant future.

In the meantime, for their initial LPDDR5T parts, SK hynix is going to be shipping a multi-die chip in a x64 configuration. According to the company's PR office, they're producing both 12Gb and 16Gb dies, so there's a potential range of options for package densities, with the 16GB (128Gbit) package being the largest configuration. All of this RAM, in turn, is being built on the company's 1a nm process, which is their fourth-generation 10nm-class process using EUV, and paired with high-k metal gates (HKMG).
Huh, T/Turbo's a new one. Not standardized by JEDEC either. So that makes it analogous to Micron's GDDR6X in a way?
At least not yet. SK Hynix mentioned working with JEDEC to make a proper JEDEC standard out of LPDDR5T.
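For scale, SK hynix's launch figure for LPDDR5T was 9.6 Gbps per pin, so the package math works out roughly like this (my own arithmetic; the die count is inferred from the capacities quoted above):

```python
# A 16 GB (128 Gbit) package assembled from 16 Gbit dies -> 8 dies per package
print(128 // 16)  # 8

# Peak bandwidth of the x64 package at the announced 9.6 Gbps/pin
print(9600 * 64 / 8 / 1000)  # 76.8 GB/s theoretical peak
```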
 
Not gonna lie, 1.8 TFLOPS is disappointing. I thought we'd already been there for a few years. Unless they're talking about 8 watts for the whole SoC and not the GPU alone; in that case I'd say that's pretty damn power efficient, especially compared to the Pascal-based 1050 from 2016 (75 W TDP).

I wonder how they compare to Apple though 🤔
The source is the same video @Dakhil posted earlier. It's 8 W for the SoC, but while running a graphics benchmark, so CPU utilization is low. There are also Apple comparisons there.
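Some napkin math on the perf-per-watt claim (paper FLOPS only, and TDP isn't strictly comparable to measured SoC power, so treat this as a rough sketch):

```python
# GFLOPS per watt, comparing the video's figures with the GTX 1050's spec sheet
mobile_soc = 1800 / 8   # ~1.8 TFLOPS at ~8 W for the whole SoC -> 225 GFLOPS/W
gtx_1050   = 1862 / 75  # ~1.86 TFLOPS at 75 W TDP              -> ~24.8 GFLOPS/W

print(mobile_soc / gtx_1050)  # ~9x better perf/W, on paper
```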
 
So the bandwidth discussion had me deep-diving into the next RTX 40 mobile cards, coming out on February 22nd of this year.
I find it interesting that Nvidia claims the 4050 laptop (a 40 W, 96-bit bus part) will perform on par with the 3070 laptop, a 120 W, 256-bit bus part.

So whatever solution Nvidia and Nintendo come up with, I fully expect this new device won't suffer the same bottlenecks as the current Switch model. Nintendo has been known to use bespoke memory solutions to achieve the hardware balance they're aiming for (e.g. the GameCube versus the N64's specs). If the RTX 4050 laptop can achieve parity with the 3070 laptop despite a massive difference in memory bandwidth, then come February 22nd, when the full details about these parts come out, it will be interesting to see if the increased cache is present all the way down the product stack...
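For context on how big a raw-bandwidth gap that cache has to paper over, a rough sketch (the memory speeds are my assumptions based on typical GDDR6 bins, not confirmed specs for these parts):

```python
# Peak GDDR bandwidth: per-pin rate (Gbps) * bus width (bits) / 8 -> GB/s
def gddr_bw_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(gddr_bw_gbs(14, 256))  # 3070 laptop: 14 Gbps GDDR6, 256-bit  -> 448 GB/s
print(gddr_bw_gbs(16, 96))   # 4050 laptop: assumed 16 Gbps, 96-bit -> 192 GB/s
```

If those numbers hold, the 4050 laptop would be reaching parity with well under half the raw bandwidth, so the enlarged L2 would be doing a lot of heavy lifting.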


Edit: some benchmarks were found on the laptop variants as well
 
So the bandwidth discussion had me deep-diving into the next RTX 40 mobile cards, coming out on February 22nd of this year.
I find it interesting that Nvidia claims the 4050 laptop (a 40 W, 96-bit bus part) will perform on par with the 3070 laptop, a 120 W, 256-bit bus part.
Lovelace has giant cache increases - Infinity Cache sized, if not entirely in design - and still has higher bandwidth than AMD. Orin also used a cache increase over base Ampere, but less than Lovelace. It'll be interesting to see where Drake winds up on both.

I'd take Nvidia's performance claims with a grain of salt - they have a tendency not to use equivalent settings in those comparisons, turning on DLSS 3 when comparing against RTX 30 series cards. I suspect the like-for-like performance efficiency is closer to 2x.
 
Then limit the amount of RAM used in handheld mode, or downclock it, if that's possible.
GPU clocks can be boosted when docked because GPU work scales well with resolution. Nothing else scales the same way.

If CPU clocks differed between handheld and docked, devs would optimize for handheld, leaving docked performance underutilized, or they would have to write distinct CPU tasks for each mode, resulting in different physics, audio handling, game logic, etc.

If the amount of RAM or its frequency differed, they would again have underutilized docked performance, or they would have to optimize assets for each mode and change all the subsystems to use less RAM.

If devs fully optimized for each mode, the result would be two different games, maybe even with distinct graphics.

With the Switch, only the GPU clocks vary, so devs can write or port their games just once.
(I believe RAM bandwidth varies slightly as well, but that's offset by the lower resolution.)
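The Switch's documented clock profiles illustrate the point; a quick sketch of the ratios:

```python
# Docked vs handheld pixel counts: 1080p vs 720p
print((1920 * 1080) / (1280 * 720))  # 2.25x the pixels when docked

# Switch's documented GPU clocks (MHz): the GPU is what scales between modes
print(768.0 / 307.2)                 # 2.5x the clock, roughly tracking the pixel jump

# The CPU stays at 1020 MHz in both modes, so game logic behaves identically
```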
 
Lovelace has giant cache increases - Infinity Cache sized, if not entirely in design - and still has higher bandwidth than AMD. Orin also used a cache increase over base Ampere, but less than Lovelace. It'll be interesting to see where Drake winds up on both.

I'd take Nvidia's performance claims with a grain of salt - they have a tendency not to use equivalent settings in those comparisons, turning on DLSS 3 when comparing against RTX 30 series cards. I suspect the like-for-like performance efficiency is closer to 2x.

Yep, I fully get the part about the massive increase in cache, and yes, with Lovelace Jen-Hsun Huang has been a bit extra about where the new cards actually land.
 

[Image: power-efficiency curve comparing the Cortex-A78, Cortex-A710, and Cortex-A715]

So although the Cortex-A715 is noticeably more power efficient than the Cortex-A710, it's still slightly less power efficient than the Cortex-A78 (though the Cortex-A715 does post a noticeably higher single-core score than the Cortex-A78 at ~1.7 W).

Interesting, so the A715 dropping 32-bit support and having some things redesigned to claw back efficiency wasn't able to get all the way back to A78 levels. Also, the V/F curve seems to have shifted to the right a bit? So for the power range of something like the Switch, the A78 is still ideal. It's probably up in laptop-range power where the A715 becomes preferable.
@oldpuck Let's say the delay allowed significant changes to the memory controller, e.g. a switch from mobile RAM to desktop RAM. Could those changes improve Drake's single-threaded performance while keeping a similar hardware profile?
So as ILikeFeet mentioned, it's a loss in perf/watt, because regular DDR needs more energy to push each bit, so it'll always lose in that regard. (Energy-per-bit-wise, LPDDR should beat GDDR, and GDDR should beat regular DDR.)
Aside from that, those DDR DIMMs/sticks are probably physically too big for the Switch form factor? (Ah, but why not SO-DIMMs, you might ask. Apparently they hit a wall at 6400 MT/s.)
If there were any advantage in performance, regular DDR should be a bit better in latency (historically, regular DDR beats LPDDR on latency, and LPDDR beats GDDR)? But probably not by enough to offset the energy-efficiency loss. You're also not necessarily winning in raw bandwidth either.
As theoretical bandwidth goes, DDR5 is currently standardized up to 7600 MT/s. LPDDR5 goes up to 6400 MT/s, and 5X up to 8533 MT/s. Yes, as an individual consumer you can buy DDR5 sticks that go beyond 7600 MT/s, but they're not standardized, so device manufacturers will not promise to support those higher speeds. Go beyond JEDEC speeds and you're technically on your own.
Also, memory controllers aren't quite up to snuff yet, I think? Intel, as of Raptor Lake, doesn't guarantee Gear 2* beyond 5600 MT/s. I'm less knowledgeable about AMD's side of things, though I am aware that the 'sweet spot' for Zen 4 is 6000 MT/s, which suggests a limitation in the memory controller, the Infinity Fabric, or both.
Random aside: I'm actually curious why memory controllers for LPDDR seem to handle higher speeds than DDR. Does that historical trend of somewhat worse latency mean there's less stress, relatively speaking? Does it help that LPDDR modules tend to be soldered and have less distance to cover, making it easier to maintain signal integrity? (That's the issue with SO-DIMMs, which led to the new CAMM standard.)


*So in a few posts related to RAM, I've mentioned things like "Gear 1" and "Gear 2". Maybe I should explain what I'm referring to.
With RAM, you see speed in MT/s: megatransfers per second. Why 'transfers'? Remember that DDR stands for 'Double Data Rate': you transfer data twice per clock cycle. Ergo, the actual frequency the RAM stick runs at is half of the MT/s figure. So 6400 MT/s is 3200 MHz.
Historically, the memory controller by default attempted to run at the same frequency as the RAM. Recently (as of Rocket Lake in 2021, I believe), Intel introduced new terminology and referred to that as 'Gear 1': the IMC (Integrated Memory Controller) running at the RAM speed divided by 1. 'Gear 2' is then the IMC running at the RAM speed divided by 2. The plus side is that this allows the use of faster RAM sticks for more bandwidth; the downside is that running the IMC slower tanks your latency. It actually works out such that, on average (for typical consumers with currently existing software, at least), with DDR4 you're better off with the higher end of what you can maintain in Gear 1 than going further with Gear 2, because latency is just that important right now. With DDR5, Intel doesn't do Gear 1; I think the options are Gear 2 and Gear 4 (so yes, RAM speed divided by 4). I'm less clear on AMD, but typically the concern there is not pushing so hard that you accidentally desync from the Infinity Fabric and cause it to drop down to half speed.
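To put numbers on that (my own illustration):

```python
mts = 6400           # DDR5-6400 performs 6400 megatransfers per second
ram_clock = mts / 2  # double data rate -> the actual clock is 3200 MHz

# Intel's gearing: the IMC runs at the RAM clock divided by the gear number
for gear in (1, 2, 4):
    print(f"Gear {gear}: IMC at {ram_clock / gear:.0f} MHz")
# Gear 1: 3200 MHz, Gear 2: 1600 MHz, Gear 4: 800 MHz
```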
Unfortunately, I have no idea how the latency of DDR5 in Gear 4 would compare against LPDDR.
 
Just as DLSS 2 evolved over time, DLSS 3 will very likely do the same. Is there a chance it becomes viable to turn 30 FPS into 60 FPS with good quality, or, who knows, even 15 > 30? Currently, as analyzed by DF, the real gains come when aiming for high frame rates that exceed 60.
 
Just as DLSS 2 evolved over time, DLSS 3 will very likely do the same. Is there a chance it becomes viable to turn 30 FPS into 60 FPS with good quality, or, who knows, even 15 > 30? Currently, as analyzed by DF, the real gains come when aiming for high frame rates that exceed 60.
The problem right now is the input latency that comes with low frame rates. They could definitely do 30>60; just see the Cyberpunk RT Overdrive trailer.

 
The problem right now is the input latency that comes with low frame rates. They could definitely do 30>60; just see the Cyberpunk RT Overdrive trailer.
Just to add to that - the latency problem is unsolvable. DLSS 3 requires that you buffer a frame, so DLSS 3 always adds a frame's worth of latency. As the original frame rate goes down, the latency added goes up, so 30->60 gives you an extra 33.3 ms of latency.
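The arithmetic is simple enough to sketch out:

```python
# One buffered source frame = one source-frame time of added latency
def dlss3_added_latency_ms(source_fps: float) -> float:
    return 1000.0 / source_fps

for fps in (120, 60, 30, 15):
    print(f"{fps} fps -> +{dlss3_added_latency_ms(fps):.1f} ms")
# 120 fps -> +8.3 ms, 60 -> +16.7 ms, 30 -> +33.3 ms, 15 -> +66.7 ms
```

Which is also why 15 -> 30 generation would feel particularly rough: you'd be playing with 15 fps input response plus another ~67 ms on top.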
 
Interesting video on framerate-independent mouse movement as a way of alleviating the lag from frame generation.
 
I am humbly asking Nintendo to release the Switch 2 so we can get Nintendo games with Hi-Fi Rush-level IQ.

I was typing up something similar when I saw it but stopped myself

I want so much for framerate and resolution to largely be concerns of the past, and for Nintendo’s releases I genuinely believe it’s one ‘generation’ away. To think it might’ve been this year…
 
I was typing up something similar when I saw it but stopped myself

I want so much for framerate and resolution to largely be concerns of the past, and for Nintendo’s releases I genuinely believe it’s one ‘generation’ away. To think it might’ve been this year…
That's yet to happen even on the home consoles. I give it until the PS6 for real-time raytraced rendering with proper hardware; that's about as next-gen as we can go.
 
The Xbox Series X and Series S are being sold at a loss and both are doing pretty horribly, lol.

Getting doubled up by the PS5 in sales now that the PS5 shortage is over.

I think these were the worst holiday season sales for Xbox since like 2005.
Sorry, backtracking a bit. I thought the Series S was doing well due to wider availability. I know MS doesn't like to announce hardware sales and hasn't done so in a long time. Do you have a source for the claim that they've had their worst Xbox hardware sales since 2005?
 
That's yet to happen even on the home consoles. I give it until the PS6 for real-time raytraced rendering with proper hardware; that's about as next-gen as we can go.

This is why I used Hi-Fi as the example, and specified “Nintendo's releases”; it more closely aligns with Nintendo's ambition with tech.

I’d bet that most of Nintendo’s first party titles will prioritize 30-60fps and 1440p-2160p (ofc. with DLSS), over trying to apply real-time RT. There will be exceptions, but I still think 1080p ought to be the new floor for their most ambitious works (e.g. Monolithsoft).
 
This is why I used Hi-Fi as the example, and specified “Nintendo's releases”; it more closely aligns with Nintendo's ambition with tech.

I’d bet that most of Nintendo’s first party titles will prioritize 30-60fps and 1440p-2160p (ofc. with DLSS).
That wasn't exactly true for the Switch. It really depends on the game, but Nintendo will prioritize low resolutions and 30 FPS as long as the game runs, and we've yet to see how many Nintendo franchises will merit the massive budget needed to really push Drake. Zelda and 3D Mario are easy guesses.
 
That wasn't exactly true for the Switch. It really depends on the game, but Nintendo will prioritize low resolutions and 30 FPS as long as the game runs, and we've yet to see how many Nintendo franchises will merit the massive budget needed to really push Drake. Zelda and 3D Mario are easy guesses.

Mario Odyssey targeted 900p 60fps; Smash Ultimate, MK8DX and Splatoon 3 all targeted 1080p 60fps. All this on hardware that's only a half-step above the industry standard of 15-18 years ago. They clearly care about these things, and if they're bothering to bring DLSS on board, I see 60fps and even higher resolutions being treated as a priority.

And you’re right - they’ll probably target 30fps for Zelda, but they’ll also target 2160p, and that’s going to look absolutely crisp. It doesn’t really go against anything I said in my original post.
 
MonolithSoft developed a temporal upsampling solution to upscale Xenoblade 3 to 1080p (even downsampling in handheld mode), and there's a recent patent related to Zelda for rendering transparent objects with less processing load.

When the tech is available I think Nintendo will shoot for the moon and present their prestige first-party titles with plenty of eye candy. DLSS giving them high-res and saving them processing power is such a good fit.
 
0
I would say an announcement is dependent on release target. If release is late 2024, then I think they hold any announcement until 2024. If release is first half of 2024, then an announcement this year is well within reason.
What do you think the percentages are of seeing the new Switch in 2023, 2024, or 2025?
Any chance of seeing the system in 2023, or is it close to 0%?
 
Mario Odyssey targeted 900p 60fps; Smash Ultimate, MK8DX and Splatoon 3 all targeted 1080p 60fps. All this on hardware that's only a half-step above the industry standard of 15-18 years ago. They clearly care about these things, and if they're bothering to bring DLSS on board, I see 60fps and even higher resolutions being treated as a priority.

And you’re right - they’ll probably target 30fps for Zelda, but they’ll also target 2160p, and that’s going to look absolutely crisp. It doesn’t really go against anything I said in my original post.
DLSS is another question altogether, though. The thing with 1080p on the successor is that you might actually struggle to notice that games are running at 540/720p internally, so... Yes, having to heavily leverage DLSS in the first place literally means pushing the system hard for eye candy, which Nintendo has always cared about, whatever they say otherwise.

Also, some of the games you listed after Odyssey actually ran at dynamic 900p, and didn't even hold that most of the time.
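For reference, DLSS 2's standard per-axis scale factors make the internal resolutions easy to work out (quick sketch using the published factors: Quality = 1/1.5, Balanced = 1/1.724, Performance = 1/2, Ultra Performance = 1/3):

```python
modes = {"Quality": 1 / 1.5, "Balanced": 1 / 1.724,
         "Performance": 1 / 2, "Ultra Performance": 1 / 3}

for out_h in (1080, 1440, 2160):
    internal = {name: round(out_h * s) for name, s in modes.items()}
    print(f"{out_h}p output -> {internal}")
# e.g. 1080p output: Quality renders 720p internally, Performance renders 540p
```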
 
Smash and MK8DX are 1080p, Splatoon is dynamic 1080p.
My MK8DX copy looks blurrier than my 1080p footage on Cemu (same TV), which tells me it's actually 900p (there were articles about it at release). Splatoon is dynamic and uses FSR upscaling on top of that, which looks quite blurry since it's the 1.0 version. The Switch barely runs anything at the targets Nintendo intended, so I have no reason to expect Drake to actually run games at native 1080/1440p later down the line. Luckily, DLSS will save us.
 
My MK8DX copy looks blurrier than my 1080p footage on Cemu (same TV), which tells me it's actually 900p (there were articles about it at release). Splatoon is dynamic and uses FSR upscaling on top of that, which looks quite blurry since it's the 1.0 version. The Switch barely runs anything at the targets Nintendo intended, so I have no reason to expect Drake to actually run games at native 1080/1440p later down the line. Luckily, DLSS will save us.
I'm going with Digital Foundry's analyses.
 
I'm going with Digital Foundry's analyses.
Same, but unless the Switch's upscaler is crappier than that of a modern PC GPU... I'm not buying it with my gear, at least. Splatoon 3 was said to still use FSR and dynamic sub-1080p at release, reportedly dropping into the 540s sometimes. Xenoblade 3 is 540p with unstable 30s in docked mode, which is pretty crazy for the "Nintendo performance > fidelity" narrative out there.
 