StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

A button dedicated to an optional accessory (the camera)?

doubt
I want to remind everyone that the button having "C" written on it is a complete guess, and anything stemming from that is just as speculative as everything else. Plus, if this were to be some kind of function button like "camera" or "connect" or whatever else has been popping up, they would use a symbol, like the Home button or the screenshot button.
 
I could absolutely see it as a move comparable to Ring Fit, given that's only technically compatible with the Lite: you still need an entire extra set of Joy-Cons and then have to somehow play in tabletop mode. But at this point it feels like this entire train of thought is off topic from the hardware itself and more about the potential software and accessories Nintendo could make/market.


Yes, but it gives the top USB-C port a use case. Nintendo is too cheap to put in a second USB-C port unless there was a reason for it.


A second screen is too niche and too expensive; I can't see it being used for that, especially since many 3DS games just used the second screen as a menu.

Also note that, as the screen is now 8 inches, many people will find tabletop mode sufficient, so I can see people playing a game like Ring Fit that way.

As a USB-C plug can be inserted in either orientation, the camera could be attached facing outwards too. So we could get AR games in portable mode that use the world around you and not just a selfie angle. With AI and a motorised camera inside, it could also be smart enough to track a face and body easily, so you won't get an extreme zoom on your face when it is pointed at you.


Having a separate accessory for this is the smarter play, as it's a risky and costly bit of hardware: if it fails, it can be ignored except for the few games that support it. If it catches on, it can be integrated into the OLED version.
 
I think Nintendo is just determined to not repeat the "limited wired headsets, must use an app to voice chat" controversy of the first Switch, and are providing EVERY possible option to address it. A lot of gaming headsets are USB which could easily be dongled into a USB-C port, and you can't use the dock port for it because then you couldn't use your headphones when either docked, or in kickstand mode.
 

My general understanding is that it'll allow devs to reuse the same shader code they develop using these graphics APIs, but I hope it'll eventually allow for specific advantages for Switch 2 development.
Like AMD's GPUs on Windows and Xbox, there could be some interoperability between the platforms' development, which could have some net benefit for NVIDIA GPUs and the Switch 2.

(also the blog included the cutest corgi picture ever).
 
I personally don’t mind if the Switch 2 is just a beefed-up Switch, but seeing a unique gimmick would certainly be like a sigh of relief that Nintendo hasn’t let go of that part of themselves.

Unless, of course, their innovation is now software and accessory focused.

Or they’re gonna focus resources on how they can innovate with Switch 3
 
Buttons are grouped by "intent": the button next to a system button is a system button. It has the letter C... there aren't many options for what it can be: camera, cast, connect...

Also, wasn't there a magnetometer or something like that in the Switch 2? The same one that was in the Wii U and missing in the Switch.

The IR camera is very niche too, and now we have two.


This mode would allow devs to split the handheld's power between two screens... and if you use the top USB-C charger while you play, full power.
The problem with it being a letter C is that whichever word that could refer to might be spelled differently in other languages.
 
I have two questions after the leak; if anyone has the answer, can you tell me?

1. Since the Joy-Cons attach magnetically, how will the Joy-Con grip work? It didn't leak like the other accessories did.

2. While the Joy-Cons aren't compatible, I can see the Switch 1 Pro Controller being compatible with the Switch 2, as long as there isn't some other new technology besides the C button, and the C button isn't strictly required. Am I right?
 
Gonna give my two cents on a few things:
  • Reverse Wii U streaming would be a disaster. Between latency, video quality, and increased power draw, I can't see a way this works out well. Also, I highly doubt any games would use it for anything besides mirroring, since in docked or handheld mode there's only one screen to be used. If it has the ability to connect remotely to a TV, it would be to mirror the screen.
  • 4nm vs 8nm - I'm leaning towards 4nm simply because it'll reduce power draw, which is something I'm concerned about on this system. It'd be nice if later down the line we can overclock the system, but even then, the reduced power draw will be a strong draw for Nintendo.
  • The USB-C port on top - obviously for peripherals, but possibly for charging devices as well?
 
Really, my wish is that people would just finally get over this whole subject. We've already heard enough about the performance targets. It'll be good. You don't have to care about the irrelevant implementation detail that is a process node.
I just really enjoy process node discussions. Even with the iPhones, I was excited to see what changes 3nm would bring because it had become such a hot-button geopolitical issue, but then the A17 Pro and M3s were pretty disappointing from an efficiency standpoint. It took until the N3E process for serious efficiency gains to materialize, even though until a few months before it had seemed reasonable to expect those gains from any 3nm process.
It would be very exciting to see a 4N chip in Switch 2 at a time when Xbox, PS5, and Steam Deck are all on 6-7nm. Is this the first time Nintendo leads on process node?
 
Gonna give my two cents on a few things:
  • Reverse Wii U streaming would be a disaster. Between latency, video quality, and increased power draw, I can't see a way this works out well. Also, I highly doubt any games would use it for anything besides mirroring, since in docked or handheld mode there's only one screen to be used. If it has the ability to connect remotely to a TV, it would be to mirror the screen.
  • 4nm vs 8nm - I'm leaning towards 4nm simply because it'll reduce power draw, which is something I'm concerned about on this system. It'd be nice if later down the line we can overclock the system, but even then, the reduced power draw will be a strong draw for Nintendo.
  • The USB-C port on top - obviously for peripherals, but possibly for charging devices as well?
I think you're largely on the right track, but I don't see how the top USB-C port (if it exists) is "obviously" for peripherals. Like on the Steam Deck and others, wouldn't it more likely be for ease of charging, with the OPTION to use it for peripherals such as headphones (which you wouldn't have to unplug when docking or undocking)? An example would be the USB-C port on the Nintendo Switch Lite: it's just for charging, USB headphones, and the like, and the only "dedicated" peripherals for it are USB hubs for charging it or plugging in controllers.
 
Someone actually has to break down what clocks an 8nm 12SM chip could run at while plausibly getting over 2-3 hours of battery life from what looks like a 20 Wh battery, all while driving an 8-inch LCD display.

I'm on the TSMC 7nm boat.
 
DLSS doesn't 'increase' performance. It eats into performance. The output of DLSS is an image that is upscaled through machine learning but like any other post-processing effect, it has a cost in frame time. If you compare an image at resolution A and another image at the same resolution A with DLSS applied on top, it will take longer to generate.

DLSS is often presented as a way to magically improve performance, but what it actually does is create a serviceable mock-up of a higher-resolution image more rapidly than if you had to render it the classic way.
DLSS is a process that can be done concurrently, just like how CPU and GPU work can be done concurrently: different stages in a pipeline. As DLSS is upscaling frame 3, the GPU is rendering frame 4, and the CPU is prepping frame 5. It's not like FSR, which has to use the GPU's shader cores for the upscale; DLSS runs on separate hardware. Even though DLSS creates a serviceable mock-up of a higher resolution, it does so from a lower-resolution input, which in turn reduces the load on the GPU, allowing it to render frames faster (if the CPU can keep up with prepping), thereby increasing performance.
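As a rough illustration of that pipelining point (a toy model with made-up stage timings, not figures from any real hardware):

```python
# Toy model of a three-stage frame pipeline: CPU prep -> GPU render -> upscale.
# All timings below are invented purely for illustration.
CPU_MS, GPU_MS, UPSCALE_MS = 4.0, 9.0, 2.5

# Serial execution: each frame pays the sum of every stage.
serial_ms = CPU_MS + GPU_MS + UPSCALE_MS            # 15.5 ms per frame

# Pipelined execution: once the pipeline is full, a new frame finishes
# every time the slowest stage finishes, so throughput is limited by the
# longest stage rather than the sum (per-frame latency is still the sum).
pipelined_ms = max(CPU_MS, GPU_MS, UPSCALE_MS)      # 9.0 ms per frame

print(f"serial:    {1000 / serial_ms:.0f} fps")
print(f"pipelined: {1000 / pipelined_ms:.0f} fps")
```

Whether the upscale stage really overlaps that cleanly depends on how a given game schedules the tensor-core work, so treat this as the best case the argument above describes.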
 
Is this the first time Nintendo leads on process node?
Absolutely not - N64 was cutting edge, including node. Though back then, nodes were much bigger, and generational gains extremely transformative (like 8 to 16 to 32 bit happened in just a handful of node shifts).

Nintendo typically uses the most up to date node they can for a given chip, this was even true on Wii U, where I believe no smaller nodes would have been possible at the time with the chosen CPU architecture (though I may be wrong about that).

Nintendo values efficiency and cost effectiveness - smaller nodes benefit them perhaps the most.
 
Someone actually has to break down what clocks an 8nm 12SM chip could run at while plausibly getting over 2-3 hours of battery life from what looks like a 20 Wh battery, all while driving an 8-inch LCD display.

I'm on the TSMC 7nm boat.
Simple answer is: no, it wouldn't.

Ampere's minimum clock is 420 MHz. The T239 GPU would use just under 6 watts at minimum clocks (the TX1 GPU used about 3).
 
I think PS5 Pro is rumoured to be N4P as well.
You know better than to throw out lines like these :)

What is the source?
 
Simple answer is: no, it wouldn't.

Ampere's minimum clock is 420 MHz. The T239 GPU would use just under 6 watts at minimum clocks (the TX1 GPU used about 3).

Yeah, pretty much what I thought. The people pushing the 8nm angle still need to show some math on how that battery is supposed to power that system for even 2 hours, because I don't think you can get 2 hours, maybe not even 90 minutes reliably.
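For anyone who wants the back-of-the-envelope version of that math, here is a minimal sketch. Every input is an assumption: the ~20 Wh pack and ~6 W minimum-clock GPU figures floated above, plus a guessed budget for everything else (CPU, memory, the 8-inch LCD, radios, losses):

```python
# Back-of-the-envelope battery life estimate. All inputs are assumptions
# taken from the discussion, not measured values.
battery_wh = 20.0   # rumoured battery capacity, Wh
gpu_w      = 6.0    # claimed Ampere GPU draw at minimum clocks, W
rest_w     = 6.0    # guess: CPU, memory, 8" LCD, radios, conversion losses, W

hours = battery_wh / (gpu_w + rest_w)
print(f"~{hours:.1f} hours")   # ~1.7 hours under these assumptions
```

Change the guesses and the number moves, but it is hard to land much above two hours without either a smaller power budget or a bigger battery.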
 
I personally don’t mind if the Switch 2 is just a beefed-up Switch, but seeing a unique gimmick would certainly be like a sigh of relief that Nintendo hasn’t let go of that part of themselves.

Unless, of course, their innovation is now software and accessory focused.

Or they’re gonna focus resources on how they can innovate with Switch 3
What if its unique gimmick is… no unique gimmick? Technically that'd be unique in terms of Nintendo consoles.
 
The problem with it being a letter C is that whichever word that could refer to might be spelled differently in other languages.
Unless of course it's a multifunction system button that interfaces with whatever USB-C device is plugged in at the top (e.g. push to talk with a headset plugged in, recalibrate tracking for camera, etc). In which case a "C" button (which is apparently pure speculation anyway?) is somewhat appropriate and universally understandable.
 
Absolutely not - N64 was cutting edge, including node. Though back then, nodes were much bigger, and generational gains extremely transformative (like 8 to 16 to 32 bit happened in just a handful of node shifts).

Nintendo typically uses the most up to date node they can for a given chip, this was even true on Wii U, where I believe no smaller nodes would have been possible at the time with the chosen CPU architecture (though I may be wrong about that).

Nintendo values efficiency and cost effectiveness - smaller nodes benefit them perhaps the most.
I think 5/4nm is now more leading edge, and 3nm is cutting edge.
 
Yeah, pretty much what I thought. The people pushing the 8nm angle still need to show some math on how that battery is supposed to power that system for even 2 hours, because I don't think you can get 2 hours, maybe not even 90 minutes reliably.
Given that DF recently wrote that tweet commenting on the leaked photo of the Switch 2 and its small battery with a "Manage your expectations with the handheld performance", it seems their thinking is that it's 8nm on a smallish battery and will just be downclocked to weak performance to make it work.

I don't know why they are that pessimistic, though; they literally seem to think anything other than 8nm and massive downclocks is impossible.
 
OK, so you're saying they'd stay on the same node class instead? I thought you meant a new node class for a refresh, like the Mariko revision.

I'm saying that Samsung's 8nm is a dead end node.
To get any power efficiency/transistor density gains they would then have to redesign the chip onto 4nm or 3nm since 8nm does not use EUV.
(Not to mention Samsung's constant yield issues)

TSMC's 5nm, 4N and N4P are still in the same family, which makes it a much easier process to move their designs over vs what would happen at Samsung.
 
I was under the impression that it was 6nm.
Same here. I don't see much evidence of TSMC N4P, given what we know about the CPU and GPU clock speeds.
Given that DF recently wrote that tweet commenting on the leaked photo of the Switch 2 and its small battery with a "Manage your expectations with the handheld performance", it seems their thinking is that it's 8nm on a smallish battery and will just be downclocked to weak performance to make it work.
That's been their consistent commentary on the Switch successor yup.
 
Given that DF recently wrote that tweet commenting on the leaked photo of the Switch 2 and its small battery with a "Manage your expectations with the handheld performance", it seems their thinking is that it's 8nm on a smallish battery and will just be downclocked to weak performance to make it work.

I don't know why they are that pessimistic, though; they literally seem to think anything other than 8nm and massive downclocks is impossible.
The thing is... massive downclocks might themselves be genuinely impossible on T239 - the battery may only be a little larger in capacity than V1's, but the minimum clock for Ampere would put power consumption above V1, placing battery life well below V1's. That seems exceptionally, excruciatingly unlikely.
 
DF recently posted on the supposed leaked picture showing the battery chamber. They point out its small size and say we should therefore keep our expectations about handheld performance in check.

It's interesting because I remember them arguing a while back that the most likely reason for a larger screen was purely a practical necessity: more room for a larger battery because 8nm will be a power hog.

Now we see a smaller battery than they were telling us to expect, yet their conclusion from this is... what? That it's going to be downclocked even more than already thought on 8nm? How low can you expect everything to be downclocked before you ask why Nintendo even invested in this SoC?
 
DF recently posted on the supposed leaked picture showing the battery chamber. They point out its small size and say we should therefore keep our expectations about handheld performance in check.

It's interesting because I remember them arguing a while back that the most likely reason for a larger screen was purely a practical necessity: more room for a larger battery because 8nm will be a power hog.

Now we see a smaller battery than they were telling us to expect, yet their conclusion from this is... what? That it's going to be downclocked even more than already thought on 8nm? How low can you expect everything to be downclocked before you ask why Nintendo even invested in this SoC?
I enjoy DF but I really don’t put any stock in Rich’s analysis when it comes to the Switch successor.
 
DF recently posted on the supposed leaked picture showing the battery chamber. They point out its small size and say we should therefore keep our expectations about handheld performance in check.

It's interesting because I remember them arguing a while back that the most likely reason for a larger screen was purely a practical necessity: more room for a larger battery because 8nm will be a power hog.

Now we see a smaller battery than they were telling us to expect, yet their conclusion from this is... what? That it's going to be downclocked even more than already thought on 8nm? How low can you expect everything to be downclocked before you ask why Nintendo even invested in this SoC?
Good point. It shows that they will use whatever argument they can to argue for why 8nm is inevitable. That strikes me as incredibly odd. They seem really invested in downplaying the potential of Switch 2. I think it's because they are really set in their view of Nintendo as a company that cares nothing at all about performance, so they can't see any possibility of Nintendo using a 4nm node and having pretty high clocks.
 
I'm saying that Samsung's 8nm is a dead end node.
To get any power efficiency/transistor density gains they would then have to redesign the chip onto 4nm or 3nm since 8nm does not use EUV.
(Not to mention Samsung's constant yield issues)

TSMC's 5nm, 4N and N4P are still in the same family, which makes it a much easier process to move their designs over vs what would happen at Samsung.
Makes sense; it would be a little disappointing compared to the refresh we got with the current Switch generation, though.
 
DF recently posted on the supposed leaked picture showing the battery chamber. They point out its small size and say we should therefore keep our expectations about handheld performance in check.

It's interesting because I remember them arguing a while back that the most likely reason for a larger screen was purely a practical necessity: more room for a larger battery because 8nm will be a power hog.

Now we see a smaller battery than they were telling us to expect, yet their conclusion from this is... what? That it's going to be downclocked even more than already thought on 8nm? How low can you expect everything to be downclocked before you ask why Nintendo even invested in this SoC?
Can we please stop pretending anything they have to say is particularly relevant to us? Just saying, but this isn't the first time they have doubled down on impossibilities.
 
DLSS is a process that can be done concurrently, just like how CPU and GPU work can be done concurrently: different stages in a pipeline. As DLSS is upscaling frame 3, the GPU is rendering frame 4, and the CPU is prepping frame 5. It's not like FSR, which has to use the GPU's shader cores for the upscale; DLSS runs on separate hardware. Even though DLSS creates a serviceable mock-up of a higher resolution, it does so from a lower-resolution input, which in turn reduces the load on the GPU, allowing it to render frames faster (if the CPU can keep up with prepping), thereby increasing performance.
I appreciate your effort in introducing concurrency into the discussion. Still, even if we assume that the whole temporal feedback loop and the work done by the neural network can be completed within one frame time, the GPU has to draw more pixels on the screen than it did without DLSS. And that consumes more time.

So there is no escaping the fact that DLSS eats into frame time. That explanation should generalize to all hardware upscalers, but I am no specialist, so I won't make that claim.
 
I think the chances of a USB-C port on the top being in the final product are zero. It would look so untidy to have a cord sticking out while it's docked, and I don't see additional locking points for something to be easily attached to the top. You'd expect a magnet locking system on the top too if they were going to attach something. Plus, then you'd be limited to only docked mode for whatever these extra features are. The USB-C spec can carry a lot of data, so if there are other peripherals I'd wager they could go through the dock like everything else without needing their own dedicated port.

The idea that you'd put a camera there, with the presumption that people are putting their docks close enough to their TVs for the camera to be useful is just not well thought through.
 
I unignored you after a year or so just to check if you had started doing reasonable claims. You are now back on my ignore list.
This is a super rude unnecessary post directed towards someone who wasn't even talking to you. Did you really need to do this?
DLSS doesn't 'increase' performance. It eats into performance. The output of DLSS is an image that is upscaled through machine learning but like any other post-processing effect, it has a cost in frame time. If you compare an image at resolution A and another image at the same resolution A with DLSS applied on top, it will take longer to generate.

DLSS is often presented as a way to magically improve performance, but what it actually does is create a serviceable mock-up of a higher-resolution image more rapidly than if you had to render it the classic way.
It is pretty well understood that DLSS achieves increased performance by rendering at a reduced resolution and then reconstructing the lower-res image into a comparably high-resolution one. I don't think anyone here thinks DLSS is free performance. But the frame-time penalty you take on by running DLSS is more than made up for by the performance you gain from lowering the render resolution so much.

Not that I'm against explanations of technology, but I feel like you're mischaracterizing how people understand it. Pretty much everyone understands it's a technique that uses frame time. But the end result is that you normally gain performance vs. native resolution for a serviceable to extremely comparable image (depending on the input resolution and scaling factor).
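To put rough numbers on that trade-off (every timing here is invented for illustration, not measured from any GPU):

```python
# Illustrative frame-time comparison; all numbers are made up.
native_4k_ms    = 33.0   # hypothetical cost of rendering natively at 4K
render_1080p_ms = 12.0   # hypothetical cost of rendering the frame at 1080p
dlss_ms         = 3.0    # hypothetical cost of the DLSS upscale pass

dlss_total_ms = render_1080p_ms + dlss_ms   # 15.0 ms

# DLSS does add work on top of the 1080p render (12 ms -> 15 ms), which is
# the "eats into frame time" point. The net win is that 15 ms is still far
# cheaper than 33 ms at native 4K.
print(f"native 4K:    {1000 / native_4k_ms:.0f} fps")    # ~30 fps
print(f"1080p + DLSS: {1000 / dlss_total_ms:.0f} fps")   # ~67 fps
```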
 
Can we please stop pretending anything they have to say is particularly relevant to us? Just saying, but this isn't the first time they have doubled down on impossibilities.
I think it's because, in our terminally online gaming bubble, DF has a lot of credibility, so when they downplay the possible performance of the Switch 2, that becomes seen as established truth. Sure, none of it will matter once the real thing gets revealed and released, but their view of the Switch 2 leads to discussions on some gaming forums about Nintendo releasing a very low-performance, old-tech Switch 2.
 
On the topic of DLSS (I'm sure someone else has had this theory before me): I think the Switch 2 will use a version of DLSS that is more performant but produces slightly worse image quality. So I think comparisons to the full version of DLSS we have right now on desktop only tell half the story.
 
This is mostly a good thing, but the benefits to non-windows platforms will take a while to show up, I imagine, as it's early days yet.

Shaders are programs that run on a GPU. GPUs have their own instruction set that isn't standardized. So when a driver encounters a shader in source code form, it needs to run an internal compiler to get it down to the instructions the GPU will understand.

It turns out that it's really smart to have a sort of in-between step. Instead of going straight from the high level shader programming language to the low level GPU microarchitecture, you take a pit stop at an Intermediate Representation, a halfway point between the two.

There are a lot of advantages to having this intermediate representation, but one of them is that it's smaller and simpler and faster to compile. That means instead of shipping around the whole source code for your shader, which might take a while to compile, you ship this tiny intermediate representation that compiles very fast. Smaller file sizes, faster performance for users.

Another advantage is that occasionally a developer might be able to make smart optimizations that the compiler couldn't figure out on its own, because the developer knows things about the code that the compiler doesn't. It may be occasionally useful to bypass the high-level code that shaders are written in and get down into something lower level. The intermediate representation works on all different kinds of graphics cards, but is lower level than the standard shader language, and offers this ability.

Microsoft has developed a high-level shader language called HLSL, and most of the rest of the industry (Google, Apple, the Vulkan ecosystem, Nintendo) has moved over to it. But Microsoft also has its own intermediate representation, DXIL, and the industry has not moved to that. So if you ship a game with both a Mac and a PC version - or a PC and a Nintendo version - you can't ship the same small intermediate versions of your shaders. Either you ship the full source code, with no optimizations, or you write your small optimized versions twice.

This will begin to resolve that issue in the industry. It will make PC ports of console games easier, and easier for console games to use optimizations from PC ports.
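If it helps, here is a toy analogy of that two-step flow. Nothing in it corresponds to any real shader compiler; it just illustrates "expensive front-end once, cheap per-GPU back-end many times":

```python
# Toy analogy of the shader -> IR -> GPU-specific code flow described above.
# The 'IR' here is just a token list and the 'GPUs' are just string prefixes.

def compile_to_ir(source: str) -> list[str]:
    """Slow step, done once by the developer: parse the high-level shader
    and emit a small, portable intermediate representation."""
    return source.replace("(", " ").replace(")", " ").replace(",", " ").split()

def lower_to_gpu(ir: list[str], gpu: str) -> str:
    """Fast step, done on the player's machine: map the portable IR onto a
    specific GPU's instruction set."""
    return "\n".join(f"{gpu}:{op}" for op in ir)

ir = compile_to_ir("mul(color, light)")   # this is what ships with the game
print(lower_to_gpu(ir, "vendor_a"))       # compiled quickly for one GPU...
print(lower_to_gpu(ir, "vendor_b"))       # ...and the same IR for another
```

The point of the news, as described above, is that the same shipped intermediate form could eventually serve PC and console builds alike, instead of maintaining two intermediate formats.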
 
I think it's because, in our terminally online gaming bubble, DF has a lot of credibility, so when they downplay the possible performance of the Switch 2, that becomes seen as established truth. Sure, none of it will matter once the real thing gets revealed and released, but their view of the Switch 2 leads to discussions on some gaming forums about Nintendo releasing a very low-performance, old-tech Switch 2.
Tell me about it... Frankly I'm at that point of the speculation saga where I have close to zero doubts of what this thing will be able to do. No idea if others are in the same boat as well, but I just want to see the console announced.
 
Simple answer is: no, it wouldn't.

Ampere's minimum clock is 420 MHz. The T239 GPU would use just under 6 watts at minimum clocks (the TX1 GPU used about 3).
Where did this info on Ampere's minimum clock come from? I ask because I'd also like to know Maxwell's minimum clock. The thing is, the Switch is capable of reducing the TX1's GPU clock down to 76.8 MHz for Boost Mode. Is that because Maxwell is capable of that but Ampere can't go below 420 MHz?
 
Good point. It shows that they will use whatever argument they can to argue for why 8nm is inevitable. That strikes me as incredibly odd. They seem really invested in downplaying the potential of Switch 2. I think it's because they are really set in their view of Nintendo as a company that cares nothing at all about performance, so they can't see any possibility of Nintendo using a 4nm node and having pretty high clocks.
they are extremely conservative in their speculation, and I don’t think there has to be any sort of agenda behind it.
 
On the topic of DLSS (I'm sure someone else has had this theory before me): I think the Switch 2 will use a version of DLSS that is more performant but produces slightly worse image quality. So I think comparisons to the full version of DLSS we have right now on desktop only tell half the story.
I think we won't have user control of DLSS, and it'll be optimized differently for every game. Not everything needs to run at 60, so they'll target different render resolutions and DLSS profiles for each title. I don't think they'll have just one DLSS configuration for every game.
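Purely as a hypothetical sketch of what per-title tuning could look like (every title name, resolution, and mode below is invented; this is not any real Nintendo or NVIDIA configuration format):

```python
# Hypothetical per-title DLSS profiles; all entries are invented examples.
dlss_profiles = {
    "fast_racer_60fps": {"render": (1280, 720),  "output": (3840, 2160), "mode": "performance"},
    "open_world_30fps": {"render": (1920, 1080), "output": (3840, 2160), "mode": "quality"},
    "handheld_puzzler": {"render": (960, 540),   "output": (1920, 1080), "mode": "balanced"},
}

for title, cfg in dlss_profiles.items():
    rw, rh = cfg["render"]
    ow, oh = cfg["output"]
    print(f"{title}: {rw}x{rh} -> {ow}x{oh} ({cfg['mode']})")
```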
 
they are extremely conservative in their speculation, and I don’t think there has to be any sort of agenda behind it.
They already burned their fingers once by predicting that the Tegra X2 was going to be in the Switch, so they're going in the opposite direction now, haha. That's pretty much it.
 
they are extremely conservative in their speculation, and I don’t think there has to be any sort of agenda behind it.
I definitely see their view of Nintendo as a company influencing their view of the Switch 2. I don't see them being hugely pessimistic about other hardware releases at all; there's no way they would have argued for a worst-case scenario for something like a PS5 Pro before it was revealed, for example. With the Switch 2, they haven't even acknowledged the possibility of it being a performant handheld; they have openly spread the view that it will be pretty far off something like the Steam Deck, for example.
 