
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Not to mention, all of this completely ignores how latency-limited Turing and Ampere are as architectures.

Both have massive amounts of stall time when they fill their registers/cache due to how they are designed. AMD offsets this via larger register files and then Infinity Cache on top (NVIDIA just riffing on the latter with Lovelace).

T239, though, will have low-latency memory access, and therefore less stalling.

We already know that RT as a task is very latency-sensitive, but if Ampere is really that latency-starved even in purely rasterized workloads (as suggested by some Chips and Cheese breakdowns; the forum, not the Discord), then T239 using LPDDR may trade raw bandwidth for far less stalling in the GPU itself.

Heck, we can sort of see this latency-oriented mindset elsewhere: in Lovelace, with features like SER letting the GPU rein in stall time by reordering work (primarily targeted at ray tracing, which is a very latency-sensitive task), or in Hopper, with NVIDIA making it fully asynchronous.
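To put rough numbers on this, here's a back-of-envelope Little's Law sketch (the latency and in-flight figures are illustrative assumptions, not measured T239 numbers): sustained bandwidth is capped by how much data you can keep in flight divided by the round-trip latency.

```python
# Little's Law sketch: sustained bandwidth = bytes in flight / latency.
# All figures here are illustrative assumptions, not T239 measurements.

def sustained_bw_gbs(bytes_in_flight: int, latency_ns: float) -> float:
    """Bandwidth (GB/s) sustainable with a given amount of outstanding data."""
    return bytes_in_flight / latency_ns  # bytes per nanosecond == GB/s

in_flight = 32 * 1024  # assume the GPU keeps 32 KiB of requests in flight

for name, latency_ns in [("lower-latency LPDDR (assumed 150 ns)", 150),
                         ("higher-latency GDDR (assumed 300 ns)", 300)]:
    print(f"{name}: {sustained_bw_gbs(in_flight, latency_ns):.1f} GB/s")

# Halving the latency doubles the bandwidth extractable from the same amount
# of in-flight work, which is why stalls can matter even when the paper
# bandwidth figure looks adequate.
```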

EDIT: Referencing this review mainly, as per Dakhil's reply. Editing for visibility, but across the breakdowns I have looked at, latency has been a notable sore spot for Turing/Ampere.
Is low latency more important than roughly five times the bandwidth? For T239, can having more on-die cache for the GPU reduce reliance on bandwidth-limited LPDDR5?
 
Is low latency more important than roughly five times the bandwidth? For T239, can having more on-die cache for the GPU reduce reliance on bandwidth-limited LPDDR5?
Not at all; it's still as bandwidth-limited as we've known for a while. However, having low-latency memory should help in another major department: the CPU. CPUs always benefit from lower latency compared to GDDR6, and as you'd assume, the ones in the big consoles are heavily neutered in that area. Nobody can accurately measure the gap with this in mind, but it should let the cores punch above their weight despite the relatively low clocks (just like the A57s in the original Switch). A bigger cache should help alleviate bandwidth concerns, but as far as we know it's not any bigger than expected.
 
Is low latency more important than roughly five times the bandwidth? For T239, can having more on-die cache for the GPU reduce reliance on bandwidth-limited LPDDR5?
Latency is more important for CPUs; bandwidth is more important for GPUs. In an ideal world you have very high bandwidth and very low latency, but we don't live in that world :p

CPUs waste less time and energy if there is less latency to get what they need, i.e., less waiting for the train to come into your station. GPUs are highly parallel, so one thread covers for another; but they need a lot of data, so lots of bandwidth is what feeds them.

In consoles you make tradeoffs; consoles are built around them. You can't have low latency and high bandwidth at once, and in these cases bandwidth is much more important to a console than latency, because latency is constant and you can account for it. Bandwidth isn't so constant: one engine can make a game demand more from the memory, another less.

So really, memory bandwidth is what's important here for the console in question, not latency. I should say more important; focusing on something we do not know and can't see is a waste of time IMO, because you can't know how they'll work around it.

That isn't to say you want N64-style latency, but that was a unique and special case and not worth discussing.
 
Latency is more important for CPUs; bandwidth is more important for GPUs. In an ideal world you have very high bandwidth and very low latency, but we don't live in that world :p

CPUs waste less time and energy if there is less latency to get what they need, i.e., less waiting for the train to come into your station. GPUs are highly parallel, so one thread covers for another; but they need a lot of data, so lots of bandwidth is what feeds them.

In consoles you make tradeoffs; consoles are built around them. You can't have low latency and high bandwidth at once, and in these cases bandwidth is much more important to a console than latency, because latency is constant and you can account for it. Bandwidth isn't so constant: one engine can make a game demand more from the memory, another less.

So really, memory bandwidth is what's important here for the console in question, not latency. I should say more important; focusing on something we do not know and can't see is a waste of time IMO, because you can't know how they'll work around it.

That isn't to say you want N64-style latency, but that was a unique and special case and not worth discussing.
I still feel this highly undervalues how hard latency at an architectural level is to work around for modern consoles, given how converged they are with PC development.

Especially as we can already see the benefits of lower-latency memory to the GPU on PC, with things like RT, or RDNA2/RDNA3/Lovelace.

It just comes off as stubborn-minded to say "nah, latency doesn't matter for consoles, bandwidth 100000% of the time".
 
Are there any cost estimates of UFS 2.2 vs. UFS 3.1 vs. UFS 4.0 at 512 GB and 256 GB?

UFS 3.1 and 4.0 are common... Not sure UFS 2.2 still is... Not sure if that affects the cost to a manufacturer, lol.
 
I still feel this highly undervalues how hard latency at an architectural level is to work around for modern consoles, given how converged they are with PC development.

Especially as we can already see the benefits of lower-latency memory to the GPU on PC, with things like RT, or RDNA2/RDNA3/Lovelace.

It just comes off as stubborn-minded to say "nah, latency doesn't matter for consoles, bandwidth 100000% of the time".
Didn't I have this discussion with you before about how you do not have any expertise in this field to be making such a claim? :p
 
Didn't I have this discussion with you before about how you do not have any expertise in this field to be making such a claim? :p
And do you have the expertise to say otherwise?

The data is there that latency matters all around, and that modern GPUs can very much benefit from lower latency.

Not to mention that modern architectures are far less specifically designed around things like bandwidth/latency balance than before.

Heck, Series S|X is a massive victim of this, for frak's sake.

Ignoring that and saying "but console" sounds even more armchair-designer than what I'm saying.
 
Isn't BC the entire reason Nintendo moved to the chipset it's using for the Switch in the first place?

“In the past, we provided a service known as the ‘Virtual Console’ that allowed users to play older video games on new consoles with newer hardware. As long as the hardware remained unchanged, those games could continue to be played. However, the publishing rights to video games are complicated, and we have said that we would only add titles after securing the necessary rights. Of course, video games developed for dedicated consoles were created in different development environments for each console. As a result, when the hardware changed, the development environment could not necessarily be reused, and so the video games that had been released on older consoles could not be played on newer consoles without additional modification. Recently, however, the development environment has increasingly become more standardized, and we now have an environment that allows players to enjoy older video games on newer consoles more easily than ever before. However, Nintendo’s strength is in creating new video game experiences, so when we release new hardware in the future, we would like to showcase unique video games that could not be created with pre-existing hardware.” - Shigeru Miyamoto himself circa '22

I highly doubt NG will lack backwards compatibility, especially with those CPU core specs. In fact, making enhanced patches for Switch games on NG, just like on the Xbox Series consoles and the PlayStation 5, will be easier than ever - are people forgetting that these chipsets are capable of running modern versions of Android and Android applications? And on the GPU side, they're working with NVIDIA.
 
That’s still called backwards compatibility lol.

But yes, in theory Nintendo could do certain things now that will make it easier to support backwards compatibility later. It's too early now to be thinking of the successor to the Switch successor (we don't know if it'll be something completely new, preventing it from being BC with Switch 2).
I'm guessing that, if NVIDIA's current rendering pipeline (or at least what is designed around Ampere/Turing) is forwards-compatible with whatever comes next in terms of rendering pipeline for NVIDIA (likely enhanced with better path tracing and possibly more advanced neural rendering features), Switch 2 will have some form of forwards compatibility no matter what CPU architecture Nintendo goes with, as long as their partnership with NVIDIA continues. By that time, the IPC of ARM chips, as well as of x86-64 chips meant for portable devices (which I doubt will happen, but you never know), will have increased enough to make up for a considerable amount of the translation-layer overhead in the case of a CPU architecture change.
 
And do you have the expertise to say otherwise?
Alovon, I will quote you word for word:

“I still feel this highly undervalues how hard latency at an architectural level is to work around for modern consoles, given how converged they are with PC development.”

You're already trying to argue as though you've studied and have experience designing hardware.

I'm going to cast doubt on what you're saying because that isn't your specialty. I've already told you before I'm not experienced in this field, but that doesn't mean I think it's right to tell these people right from wrong when A) you're not a specialist and B) you're arguing as though you design hardware when you don't.

And I don't mean this in a rude way; it just feels wrong and gives people unrealistic expectations. :p
 
Alovon, I will quote you word for word:

“I still feel this highly undervalues how hard latency at an architectural level is to work around for modern consoles, given how converged they are with PC development.”

You're already trying to argue as though you've studied and have experience designing hardware.

I'm going to cast doubt on what you're saying because that isn't your specialty. I've already told you before I'm not experienced in this field, but that doesn't mean I think it's right to tell these people right from wrong when A) you're not a specialist and B) you're arguing as though you design hardware when you don't.

And I don't mean this in a rude way; it just feels wrong and gives people unrealistic expectations. :p
My big thing is I'm just not one for shutting down discussion.

And just saying "there's no way latency can help the GPU" is different from "I don't think it will help the GPU too much".

Big difference there in my personal opinion.
 
Are there any cost estimates of UFS 2.2 vs. UFS 3.1 vs. UFS 4.0 at 512 GB and 256 GB?

UFS 3.1 and 4.0 are common... Not sure UFS 2.2 still is... Not sure if that affects the cost to a manufacturer, lol.
I have been trying to find this myself, but it seems hard to get reliable pricing. The best bet is going to the manufacturer and getting the product code, then pasting the product code into Google, which links you to AliExpress... yeah, a bit sketchy.

Using this method I see $58 for a Samsung 256GB UFS 4.0 chip (KLUEG4RHHD-B0G1), assuming it's not a scam.

I also get $45 for a Samsung 256GB UFS 3.1 chip (KLUEG8UHDC-B0E1).

There are some slight differences in package size that change the chip identifier and the prices, but this is probably at least ballpark pricing, and we can assume that when ordering in the millions you are definitely getting a nice discount versus what you see above.
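As a quick sanity check on those two listings (taking the AliExpress prices at face value, which is a big if), here's the per-gigabyte math:

```python
# Rough $/GB from the (unverified) AliExpress listings quoted above.
listings = {
    "Samsung 256GB UFS 4.0 (KLUEG4RHHD-B0G1)": 58.0,
    "Samsung 256GB UFS 3.1 (KLUEG8UHDC-B0E1)": 45.0,
}

for part, price_usd in listings.items():
    print(f"{part}: ${price_usd / 256:.3f}/GB")

# ~$0.227/GB for UFS 4.0 vs ~$0.176/GB for UFS 3.1 at these prices --
# about a 29% premium for UFS 4.0, before any volume discount.
```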
 
I have been trying to find this myself, but it seems hard to get reliable pricing. The best bet is going to the manufacturer and getting the product code, then pasting the product code into Google, which links you to AliExpress... yeah, a bit sketchy.

Using this method I see $58 for a Samsung 256GB UFS 4.0 chip (KLUEG4RHHD-B0G1), assuming it's not a scam.

I also get $45 for a Samsung 256GB UFS 3.1 chip (KLUEG8UHDC-B0E1).

There are some slight differences in package size that change the chip identifier and the prices, but this is probably at least ballpark pricing, and we can assume that when ordering in the millions you are definitely getting a nice discount versus what you see above.
With parts like this, you're never gonna get reliable pricing. What you find in stores like this is either binned parts, leftovers, or overstock. Companies purchase units in defined quantities, and those are made to order.
 
My big thing is I'm just not one for shutting down discussion.

And just saying "there's no way latency can help the GPU" is different from "I don't think it will help the GPU too much".

Big difference there in my personal opinion.
But I didn’t say that lol
 
it was more because they didn't have better options
That's also true but what options do you honestly think they have now? If Nintendo wants to continue with their hybrid approach to the console/handheld market they need to prioritize battery life, in which case ARM wins hands down at the moment, even catching up to architectures like x86-64 in terms of single-core performance year after year. It's the move for NG and potentially even the successor of NG unless they pivot to another type of gaming experience (which, knowing that innovation is somewhat of a priority for Nintendo generation after generation, can absolutely happen).
 
And do you have the expertise to say otherwise?

The data is there that latency matters all around, and that modern GPUs can very much benefit from lower latency.

Not to mention that modern architectures are far less specifically designed around things like bandwidth/latency balance than before.

Heck, Series S|X is a massive victim of this, for frak's sake.

Ignoring that and saying "but console" sounds even more armchair-designer than what I'm saying.

I mean the good thing here is that it seems like the Switch 2 will get any benefits of lower latency while still having similar bandwidth-per-TF to the desktop cards. Hopefully, anyway. I just wish that LPDDR5X was more likely.
 
But I didn’t say that lol
It came off like that, TBH.

The general point is, I feel like treating LPDDR as bad versus GDDR completely misses the benefits it can bring and the (apparent) limitations of console optimization in the modern day.

When Insomniac is seemingly the only studio that got around the limitations of Zen 2 + RDNA 2 without Game Cache/Infinity Cache and with GDDR latency (without needing to sidestep the issue by just ditching HWRT, like Epic did with software Lumen)...

It does sort of show that there are limits to how much a dev can optimize for the new hardware at this stage.

Heck, even Insomniac sort of sidestepped it by dumping a lot of processing onto the CPU to use the bandwidth, but Series S|X can't do that.

Now, whether that is an industry problem or a hardware problem is a different debate (although TBH, on Series S|X it's a hardware problem; that split-memory design while keeping GDDR is so dumb).
 
With parts like this, you're never gonna get reliable pricing. What you find in stores like this is either binned parts, leftovers, or overstock. Companies purchase units in defined quantities, and those are made to order.
I agree. I tried to caveat it, but also felt providing some data points was better than nothing.
 
That's also true but what options do you honestly think they have now? If Nintendo wants to continue with their hybrid approach to the console/handheld market they need to prioritize battery life, in which case ARM wins hands down at the moment, even catching up to architectures like x86-64 in terms of single-core performance year after year. It's the move for NG and potentially even the successor of NG unless they pivot to another type of gaming experience (which, knowing that innovation is somewhat of a priority for Nintendo generation after generation, can absolutely happen).
Yes, with ARM being the best option, given they make reference designs that scale to many different product segments.

Qualcomm is an option, even if a bit of a long shot. They make good products that are held back by being on Android and by a lack of documentation, but that's every Android chip maker.

AMD could also be an option, since they've talked about being ready to make ARM CPUs if need be.
 
@Alovon11 - respectfully, the article you cited doesn't support your claims.

The article specifically says that latency based stalls don't translate into absolute performance loss due to parallelism.

The article also is specifically profiling one shader in one game - Starfield. A game that is both an Xbox console exclusive and an AMD-sponsored project. The odds of this shader being optimized for AMD's hardware (with its large register file and caches) are extremely high.

There isn't any strong indication that lower latency RAM would be sufficient to overcome the problems of register spills or smaller instruction caches.

If none of these things were true - if warp stalls always equaled lost performance, if the Starfield-specific stalls were global to the architecture, and if LPDDR5 completely eliminated those stalls - then you're looking at a performance uplift of ~10%. That's nice, but it's not game* changing. It's the difference between 500MHz and 550MHz in clock speed.

There are lots of reasons to believe that this particular result doesn't generalize. As you point out, memory latency is an issue when registers spill or caches miss. The only real look at memory latency on Ampere is also from Chips and Cheese, and it tests the RTX 3090. The 3090 is inefficient by design - it's pursuing performance at all costs, and because of that it is cache-starved and register-starved relative to the rest of the line. That's like trying to guess a Camaro's gas mileage by watching Kyle Larson**

Which is not to say that latency is irrelevant, but -
Not to mention, all of this completely ignores how latency-limited Turing and Ampere are as architectures... Both have massive amounts of stall time when they fill their registers/cache due to how they are designed... T239, though, will have low-latency memory access, and therefore less stalling.
- is an exceptionally bold claim that is, at best, inappropriately confident.

* lol
** Calling @chocolate_supra
 
Regarding the BC discussion that has been beaten to death by now, I don’t believe a “digital only” version of the Switch 2 forbids the idea of BC. Like…at all.

Yes, Switch cartridges do exist, but I believe I'm correct in saying that every single Switch title has a digital version on the eShop. You don't need cartridge slots to maintain BC. It just makes it frustrating for collectors and fans of physical media, is all.

But given the current trends in the gaming industry, it is not a question of if physical media will, for all intents and purposes, die, but when it'll happen.

I don't believe Nintendo would do that at launch for the Switch 2 at this point in time, but as time passes and sales and other incentives encourage people to “switch” to digital, a digital-only revision is more likely than not, I believe; and if that version is successful enough, Nintendo could toy with the idea of a digital-only system.

PC gaming has been doing it for years now, and PCs aren't as different from consoles these days. They're more similar than they are different.
 
No. I don't have time to go through every game you bring up, but Crysis Remastered doesn't use the same visual settings between the One S and the Series S.

The conversation that you initially replied to was "can NG run Breath of the Wild at 4k60 without any customization." The answer is "probably not." You posted a video of a PC running 4k20 in emulation. That didn't change the answer.


You're right, brute force isn't the only factor here; engine changes, settings changes, and optimizations are all possible. Could Breath of the Wild run at 4k60 on NG? Yes. Can it do it with just brute force, without DLSS or engine optimizations? No.

That's the conversation we're having. I'm not sure which of us is misunderstanding the other, but I think we're talking past each other.


You're missing doubling the frame rate. The number of pixels that the GPU has to push is 12x. Just because a game might be CPU-limited doesn't mean that the CPU is the only barrier.



Yes, this is exactly what I said. That's why I said "GPU-limited games." But it's also why engine optimizations matter. We're only talking about brute-force solutions. Again, this was a conversation about whether or not the BotW demo was running in backwards compatibility mode. I'm not talking about what's possible for a native app on NG, I'm talking about what's achievable through a BC layer.


We know this is almost certainly not true. Sorry.


Nintendo Switch is built for speed - it doesn't use the sorts of technologies that make this possible, because they kill performance. The situation you are describing isn't technically feasible.
Ok, I get it.
From my calculations, Switch to Drake (assuming 1.1GHz docked) is an 8.6x difference. 900p/30 to 2160p/60 is an 11.52x difference. You still have to double the amount of pixels per second, which is 2x GPU utilization, assuming no bottleneck and perfect scaling.

If you double the FPS, you also double the number of pixels being rendered per second. 3.4TF is more than enough for BotW 4K30 if there's no other bottleneck, but you will need an 11.5x jump if you want 60fps too.
Ok, so:

  • 900p@30fps = 43,200,000 pixels per second
  • 1440p@60fps = 221,184,000 pixels per second

The difference is 5.12 times, well below what Switch 2 can do.

In the end, the more probable scenario is that Zelda is running at 2K and upscaled to 4K via DLSS, like you all said before. The good side of this is some nice anti-aliasing, which is lacking in this game (or any other big N game these days).

Hopefully the extra resources left over can be used for things like better environment models, textures, and draw distance, if they really try to make a port/remaster version for NS2.
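For anyone who wants to check or extend these numbers, here's the same arithmetic in a few lines of Python (assuming 16:9 frames throughout, so "900p" means 1600x900):

```python
# Pixels-per-second scaling for the BotW scenarios discussed above.
# Assumes 16:9 frames: 900p = 1600x900, 1440p = 2560x1440, 2160p = 3840x2160.

def pixel_rate(width: int, height: int, fps: int) -> int:
    """Pixels the GPU must render per second at a given resolution and framerate."""
    return width * height * fps

base = pixel_rate(1600, 900, 30)  # Switch docked BotW: 43,200,000 px/s

for label, rate in [("1440p60", pixel_rate(2560, 1440, 60)),   # 221,184,000 px/s
                    ("2160p30", pixel_rate(3840, 2160, 30)),   # 248,832,000 px/s
                    ("2160p60", pixel_rate(3840, 2160, 60))]:  # 497,664,000 px/s
    print(f"{label}: {rate / base:.2f}x the pixels per second of 900p30")

# Prints 5.12x, 5.76x, and 11.52x -- the ratios quoted in the posts above.
```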
 
You're right, brute force isn't the only factor here; engine changes, settings changes, and optimizations are all possible. Could Breath of the Wild run at 4k60 on NG? Yes. Can it do it with just brute force, without DLSS or engine optimizations? No.
I believe Oldpuck's right; game rendering in general is a very idiosyncratic process for the hardware, and when you're already pushing almost 12x the pixels per second... something's got to bottleneck somewhere, and we're assuming it's the GPU here. Put like that it doesn't sound like a big deal, but in the theoretical (with a small chance of being real) scenario of BOTW actually running at 4K/60 on T239 without any optimizations made for Ampere-Lovelace or DLSS... we're no longer talking about a generational leap but almost two generational leaps at once (in raw power!). Going from the TFLOP numbers we currently have and the inevitable bandwidth limitations, it's definitely hard to believe it could pull it off legitimately.
 
Which is not to say that latency is irrelevant, but -

- is an exceptionally bold claim that is, at best, inappropriately confident.
The problem is that once the registers/cache spill over, the GPU needs to call out to external memory (as per standard memory pipelines).

So yes, LPDDR having a big latency advantage versus GDDR would help the issue, assuming the registers/cache do spill over (considering the thing seemingly only has 1MB of L2 at this point, that is likely a yes; it will spill).

The question is how much.

And for the record, this isn't saying that LPDDR + Ampere can overcome a 2x bandwidth deficit versus the full-rate GDDR6 in Series S.

All it's saying is that, when working within the 102.8GB/s of the LPDDR5 (at least when docked), it would probably have somewhat more stability and less stalling than desktop Ampere, as the LPDDR would return a call to the GPU faster than GDDR; ergo less stalling and waiting on memory for operations to resume.

Not even saying it'd make it Lovelace-level in SM/perf efficiency.

Just that it is a thing that should be considered when trying to pin down how strong/capable this thing is, and ignoring it comes off as a bit arrogant.

The fact of the matter is you'd still need to design around it to get optimal use of it. The point, however, is that if you do, the returns may be greater than what you'd guess from looking at an RTX 3050 Laptop GPU running Cyberpunk.
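On the "how much" question, one rough way to frame it is average memory access time (AMAT). Every number in this sketch is a placeholder assumption, not a measured T239 or Series S figure:

```python
# AMAT sketch: average access time = hit_time + miss_rate * miss_penalty.
# Every figure here is an illustrative placeholder, not a measured number.

def amat_ns(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average latency the GPU sees per memory access, in nanoseconds."""
    return hit_ns + miss_rate * miss_penalty_ns

l2_hit_ns = 50    # assumed L2 hit latency
miss_rate = 0.15  # assumed L2 miss rate for a spill-heavy shader

for mem, penalty_ns in [("LPDDR5 (assumed ~150 ns to DRAM)", 150),
                        ("GDDR6 (assumed ~300 ns to DRAM)", 300)]:
    print(f"{mem}: AMAT = {amat_ns(l2_hit_ns, miss_rate, penalty_ns):.1f} ns")

# With these placeholders, the lower-latency memory saves ~22.5 ns per access
# on average; what that buys you scales directly with the miss (spill) rate,
# which is exactly the "how much" in question.
```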
 
I believe Oldpuck's right; game rendering in general is a very idiosyncratic process for the hardware, and when you're already pushing almost 12x the pixels per second... something's got to bottleneck somewhere, and we're assuming it's the GPU here. Put like that it doesn't sound like a big deal, but in the theoretical (with a small chance of being real) scenario of BOTW actually running at 4K/60 on T239 without any optimizations made for Ampere-Lovelace or DLSS... we're no longer talking about a generational leap but almost two generational leaps at once (in raw power!). Going from the TFLOP numbers we currently have and the inevitable bandwidth limitations, it's definitely hard to believe it could pull it off legitimately.
Still feel people are overblowing bandwidth TBH.
 
That's like trying to guess a Camaro's gas mileage by watching Kyle Larson**
You mean Kyle Larson, who drives the NASCAR Camaro, which doesn't share a single nut, bolt, or screw with an actual Camaro, having a Hendrick 5.8L engine (instead of the 6.2L Chevy LT4) and a custom tube-frame chassis with just Camaro-shaped stickers on the front and back?

That Kyle Larson?

Yeah I think that tracks with your point 😉
 
Still feel people are overblowing bandwidth TBH.
AFAIK it's more of a concern with these hypothetical scenarios of pushing a 900p game to 4K and doubling the framerate without any real optimizations other than moving the game to NVN2. It might not be as much of an issue when native games will already be outputting around 720p (or even below, most likely), but when literally pushing almost 12x the pixels per second, across both framerate and resolution... this is where we might see the T239 fall to its knees, just like the big consoles.
 
I'm seeing a lot of talk about BC, but I haven't seen people discuss whether pre-existing controllers will be compatible with NG. I suspect they will be, depending on what the new Joy-Cons (if there are any) bring to the table. I think we might see a lot of new games not work with the old controller, similar to how PS4 controllers don't work with PS5 games, or when Nintendo added Wii MotionPlus and you needed an adapter on old Wii Remotes to play games like Mario & Sonic at the Olympic Games. Anyone else have thoughts on this?
 
You mean Kyle Larson, who drives the NASCAR Camaro, which doesn't share a single nut, bolt, or screw with an actual Camaro, having a Hendrick 5.8L engine (instead of the 6.2L Chevy LT4) and a custom tube-frame chassis with just Camaro-shaped stickers on the front and back?

That Kyle Larson?

Yeah I think that tracks with your point 😉
Everybody always beats me to making the oddly specific nerdy posts :mad:
 
Everybody always beats me to making the oddly specific nerdy posts :mad:
[image]
 
yes, ARM being the best option given they can make reference designs that scale to many different product segments.

Qualcomm is an option, even if a bit of a longshot. they make good products that's held back by being on Android and their lack of documentation, but that's every android chip maker

AMD could also be an option since they talked about being ready to make ARM cpus if need be
Yes, Qualcomm is somewhat of a contender for whatever's after NG, especially considering their more recent CPU designs like the Snapdragon 8 Gen 2. I guess maybe even Samsung as well, to a certain extent.
 
Do people realistically believe Nintendo is gonna move on from Nvidia in the long term?

Not only does their proprietary hardware technology align perfectly with what Nintendo would want out of a portable machine, they also provide great dev tools, which is massive for third-party support.

The only way I see that happening, honestly, is if they just move on from the Switch concept itself, thus doing another "reset" of sorts (personally, I'm expecting at least some kind of "Switch trilogy" in terms of hardware before something else happens).
 
Do people realistically believe Nintendo is gonna move on from Nvidia in the long term?

Not only does their proprietary hardware technology align perfectly with what Nintendo would want out of a portable machine, they also provide great dev tools, which is massive for third-party support.

The only way I see that happening, honestly, is if they just move on from the Switch concept itself, thus doing another "reset" of sorts (personally, I'm expecting at least some kind of "Switch trilogy" in terms of hardware before something else happens).
I don't think anyone seriously thinks that is likely; we're just discussing hypotheticals.
 
Still feel people are overblowing bandwidth TBH.

I agree with this completely; there are so many details we aren't currently aware of with Drake. I do believe that memory latency issues are a concern in the industry; if they weren't, we wouldn't have companies using expensive solutions like large on-die caches. Trying to get a rough idea of how Drake will perform by comparing it to desktop RTX cards is flawed as well, because this will be the very first Ampere-based SoC dedicated to gaming.
 
And desktop RTX isn't dedicated to gaming?
Of course it is, but in a PC environment; I said the first SoC dedicated to gaming...
We can't possibly make any accurate assumptions that Drake will have the same requirements as desktop Ampere to achieve maximum performance for its specifications.

Even though Ampere is dedicated to PC gaming, I still fully expect Nintendo and its developers to extract far more performance from the hardware than was achieved on desktop (no different to what was achieved on a very non-custom TX1).
 
I'm seeing a lot of talk about BC, but I haven't seen people discuss whether pre-existing controllers will be compatible with NG. I suspect they will be, depending on what the new Joy-Cons (if there are any) bring to the table. I think we might see a lot of new games not work with the old controller, similar to how PS4 controllers don't work with PS5 games, or when Nintendo added Wii MotionPlus and you needed an adapter on old Wii Remotes to play games like Mario & Sonic at the Olympic Games. Anyone else have thoughts on this?
I suspect all old controllers will work fine with it, but with two caveats:

The new console will charge Joy-Con, but they won't attach and can't be used in handheld mode.

Some games will require you to use the "new gen" controller, but only if they use new-controller-specific features.

On the whole I expect compatibility wherever possible.
 
I feel like it’s safe to assume Joy-Cons and Pro Controllers will work via Bluetooth or USB since it’s incredibly unlikely the console itself will lack any form of Bluetooth (if anything it will be a newer version so that audio over Bluetooth is better) and it’s mostly unlikely the dock will lack at least one USB-A port.

After all, Wii Remotes and Wii Remote extensions were compatible with the Wii U. BC for the Switch controllers is almost a guarantee; people have mostly adapted to the current controller layout of the Switch across most major console environments, including niches like Windows handhelds and streaming handhelds and devices like the Steam Deck. The only real difference between Switch and other controllers is the inclusion of analog triggers in most of them, if not all; haptic feedback and adaptive triggers in the case of the DualSense; trigger vibration in the case of the Xbox controllers; and touchpads in the case of the Steam Deck.

Personally I think it would be innovative if for NG the contacts meant for Joy-Cons allow for more throughput of data if you’re using a newer controller/version of the Joy-Con but are still perfectly backwards compatible with original Joy-Cons. The features I think would make sense for a new version of the Joy-Con controllers are definitely haptic feedback and some form of analog triggers. I really enjoy the analog triggers of the GameCube controller with the satisfying travel at the end with the click, I think that would work better for a portable device too. Another neat addition would be a touchpad or two, that would really make NG stand out from the crowd of existing handheld devices as not even devices such as the ROG Ally have a touchpad (even though of course it has a touchscreen). I just think a touchpad would enable the developers at Nintendo to create new types of experiences reminiscent of the type of thing they’d do with games like Flipnote Studio - though they could use the touchpad in more hardcore gaming scenarios as well that require precision.
 
I’ll honestly be disappointed if the NG Switch doesn’t have some kind of NG Joy-con 2.0 with some thoughtful improvements.

I don’t mean it needs new features. I’ll be fine if they didn’t. But after years of these 1.0 versions, I’d like them to make good improvements to the next ones.
 
I’ll honestly be disappointed if the NG Switch doesn’t have some kind of NG Joy-con 2.0 with some thoughtful improvements.

I don’t mean it needs new features. I’ll be fine if they didn’t. But after years of these 1.0 versions, I’d like them to make good improvements to the next ones.
I think for product differentiation alone, it's a safe bet.
 
I feel like it’s safe to assume Joy-Cons and Pro Controllers will work via Bluetooth or USB since it’s incredibly unlikely the console itself will lack any form of Bluetooth (if anything it will be a newer version so that audio over Bluetooth is better) and it’s mostly unlikely the dock will lack at least one USB-A port.

After all, Wii Remotes and Wii Remote extensions were compatible with the Wii U. BC for the Switch controllers is almost a guarantee; people have mostly adapted to the current controller layout of the Switch across most major console environments, including niches like Windows handhelds and streaming handhelds and devices like the Steam Deck. The only real difference between Switch and other controllers is the inclusion of analog triggers in most of them, if not all; haptic feedback and adaptive triggers in the case of the DualSense; trigger vibration in the case of the Xbox controllers; and touchpads in the case of the Steam Deck.

Personally I think it would be innovative if for NG the contacts meant for Joy-Cons allow for more throughput of data if you’re using a newer controller/version of the Joy-Con but are still perfectly backwards compatible with original Joy-Cons. The features I think would make sense for a new version of the Joy-Con controllers are definitely haptic feedback and some form of analog triggers. I really enjoy the analog triggers of the GameCube controller with the satisfying travel at the end with the click, I think that would work better for a portable device too. Another neat addition would be a touchpad or two, that would really make NG stand out from the crowd of existing handheld devices as not even devices such as the ROG Ally have a touchpad (even though of course it has a touchscreen). I just think a touchpad would enable the developers at Nintendo to create new types of experiences reminiscent of the type of thing they’d do with games like Flipnote Studio - though they could use the touchpad in more hardcore gaming scenarios as well that require precision.
I doubt higher data throughput would be necessary to add such features, since each Joy-Con connector is, I believe, USB 2.0. Plenty of headroom as regards input; even VR sensors can cope with USB 2.0 speeds.

As for compatibility, I 100% expect backwards compatibility in the connector. Sure, they can improve how they ATTACH, but changing the connector means Switch owners bringing over their Joy-Con would be left with no way to charge them on the new system, and I doubt that's acceptable. They can improve them in many ways without completely breaking compatibility. Though as I said, while I expect them to CHARGE Joy-Con fine, I doubt they'll allow you to play in handheld mode with original Joy-Con, due to the likely larger size of the device. This is a fairly easy thing to deal with UX-wise: don't have a latch for OG Joy-Con on the new console, and pop up a little error message saying "Please attach Joy-Con 2.0 to use Handheld Mode."
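To put a number on that headroom (the report size and polling rate here are assumptions for illustration, not actual Joy-Con figures):

```python
# USB 2.0 headroom sketch for controller input. The report size and polling
# rate are assumptions for illustration, not actual Joy-Con figures.

usb2_bps = 480_000_000  # USB 2.0 high-speed signaling rate, bits per second
report_bytes = 64       # assumed size of one input report
poll_hz = 1000          # assumed 1 kHz polling rate

controller_bps = report_bytes * 8 * poll_hz  # 512,000 bits per second
print(f"Controller traffic: {controller_bps / usb2_bps:.4%} of the USB 2.0 link")

# ~0.11% of the raw link rate, before protocol overhead -- plenty of headroom.
```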
 