
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Nothing. No new dock. Why would they? They already have a perfectly good, well-ventilated, well-designed dock with 4K output that they haven't used yet. I can't see why they would design a dock with 4K output, superior ventilation, and intentionally stripped USB 3 support (freeing those USB 3 lanes for video out, something the current Switch doesn't need), only to never make use of it. It can deliver 35W directly to a system that peaks at 18W of charging. Why is the dock equipped to fast-charge a device that can't accept it?

Too many features, too many nips and tucks and unnecessary improvements. In a revision where they cost-optimised the system motherboard to within an inch of its life, the dock got bigger and more capable in a way I can only read as future-proofing. Why would they introduce a new, 4K-compatible HDMI cable? Why is the slot physically larger?

A new system, of course, would make all of this make sense. Market the Switch OLED with the white dock and the Switch [REDACTED] with the black dock. You get a lot of good things out of this arrangement. Consumer goodwill from compatible accessories, something Nintendo has often done, with different generations sharing AC adaptors and AV equipment all the time. It means simplified production, since you only need to manufacture one set of cables and docks for every console on the market without dropping your popular last-gen device. It means familiarity for consumers. It means brand consistency.

The Dock isn't some cornerstone of marketing; the black Dock with LAN Port isn't even advertised. It has the better ventilation and superior I/O the next generation needs. Why not embrace it, and with it the economies of scale and consumer goodwill that come with it?


If there is a new dock, there are a LOT of limitations to consider. Some things can't be externalised. Fans, for instance: you'd have one fan inside the system and another, larger fan on the outside forcing air in against the internal fan. That just doesn't work. If the internal fan can't keep up, an external fan pushing against it won't fix that; if the internal fan can keep up, then the external fan is providing no advantage. It's like those USB fans that clip onto the back of a home console: useless and a potential failure point. And if the console is running in TV mode at full blast and gets pulled from the dock, its internal fan needs to be able to handle full-throttle TV mode on its own anyway.
I expect them to use the same internals but use a different design in the "new Dock"
whether that's just a slightly different shape... or a different color... or a completely different shape...
 
BotW couldn't rely on quickly streaming assets in and out of RAM because it was limited by disk drive speeds. The bottom line is that the team has had years to optimise the engine for Switch, and I think a lot of people will be surprised just how much extra the team was able to squeeze out of the Switch compared to BotW.
I suspect this new headroom will be directed at gameplay rather than at fundamentally improving the base visuals. Which, for me, is fine!
 
You are right, Konami is not a good example. When I talk about trying hard, I think EA is the clearest case. Without even getting into the games they might release (and I'm not thinking about Battlefield either), the treatment of FIFA's releases on Nintendo Switch is ridiculously lazy and quite disrespectful. I mean, it's not like they aren't releasing the game on Switch. And it's not like it isn't selling (it's selling very well on Switch). And yet EA is making absolutely no effort.

Metroid Prime Remastered is a wonderful gaming experience, but third-party studios have also shown that where there's a will there's a way. And when I point to Capcom, it's precisely because I don't think Monster Hunter (Rise is a brilliant technical success on Switch) is their only franchise that can work commercially on a Nintendo system.
EA seemingly has a grudge against Nintendo going back to the Wii U era.
 
I expect them to use the same internals but use a different design in the "new Dock"
whether that's just a slightly different shape... or a different color... or a completely different shape...
That would introduce unnecessary production changes and would require some internals to be redesigned.

They already have a Dock with LAN Port in black, but only advertise the Dock with LAN Port in white. If they want their new colour, there it is.
 
That's pretty unrealistic, especially when talking about the memory bandwidth constraints.

*assuming you mean a 4x increase in native pixels
I wouldn't call it unrealistic, though maybe a little unlikely. I think the most likely setup will be 720p in handheld mode and 1080p in TV mode, upscaled (by DLSS or otherwise) to 1080p and 4K respectively.
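For a quick sense of the scale factors in play, here's a little back-of-envelope Python (just the standard 16:9 resolutions, nothing confirmed about actual hardware):

```python
# Pixel counts for the common 16:9 resolutions (illustrative only).
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name:>5}: {count:>9,} px")

# Ratios relevant to the discussion above:
print(f"1080p / 720p = {pixels['1080p'] / pixels['720p']:.2f}x  (current Switch docked vs handheld)")
print(f"4K / 1080p   = {pixels['4K'] / pixels['1080p']:.2f}x")
print(f"4K / 720p    = {pixels['4K'] / pixels['720p']:.2f}x")
```

So 720p to 1080p is the 2.25x we already have, 1080p to 4K is 4x, and 720p all the way to 4K is the full 9x.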
 
Is there any chance that docked mode could produce 4x as many pixels (as compared to 2.25x for the current Switch), or is that simply too big of a jump for most chips when given more electricity?
On 8nm, Orin NX's lowest clock before the power curve utterly bottoms out is 420MHz. In its "power usage be damned" mode, it goes up to 1301MHz before thermal throttling kicks in. That's only about 3x.

In theory, Ampere can go higher in terms of clocks, with RTX 3090 Ti having a boost clock of 1860MHz, and obviously, you could push that as long as you could dissipate the heat. And if Drake is on a different process node, potentially that changes the power curve.

But in practice I think a better node would push the whole power curve higher rather than make it wider, meaning handheld clocks would rise along with docked, and the Orin NX already needs a much larger cooling system than is practical in a Switch. So 4x is probably past the physical bounds of the device.

Edited to add: yes, I am assuming you mean "natively rendered pixels of shared fidelity and features". I'm sure it can create an all black, natively rendered, 4K image quite nicely :)
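Putting rough numbers on that, using the Orin NX clocks quoted above and assuming (optimistically) that GPU performance scales linearly with clock:

```python
# Orin NX clocks quoted above, plus the optimistic assumption that GPU
# performance scales linearly with clock. Bandwidth and thermals usually
# make real-world scaling worse than this.
low_clock_mhz = 420     # lowest clock before the power curve bottoms out
high_clock_mhz = 1301   # "power usage be damned" clock

print(f"Clock ratio: {high_clock_mhz / low_clock_mhz:.2f}x")

# Pixel ratio you'd need for 1080p handheld -> native 4K docked:
print(f"1080p -> 4K: {(3840 * 2160) / (1920 * 1080):.2f}x")
```

Roughly a 3.1x clock range against a 4x pixel target, before you even get to bandwidth.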
 
On 8nm, Orin NX's lowest clock before the power curve utterly bottoms out is 420MHz. In its "power usage be damned" mode, it goes up to 1301MHz before thermal throttling kicks in. That's only about 3x.

In theory, Ampere can go higher in terms of clocks, with RTX 3090 Ti having a boost clock of 1860MHz, and obviously, you could push that as long as you could dissipate the heat. And if Drake is on a different process node, potentially that changes the power curve.

But in practice I think a better node would push the whole power curve higher rather than make it wider, meaning handheld clocks would rise along with docked, and the Orin NX already needs a much larger cooling system than is practical in a Switch. So 4x is probably past the physical bounds of the device.

Edited to add: yes, I am assuming you mean "natively rendered pixels of shared fidelity and features". I'm sure it can create an all black, natively rendered, 4K image quite nicely :)

This discussion has come up before, on the topic of where we could land performance-wise if we're on a different node than desktop Ampere's.
I think back then most of us were also in agreement that it's not a win for them to hit 4-5 TFLOPS of theoretical (docked) performance if the overall balance of the design doesn't allow them to fully take advantage of that power.

I'm definitely interested, if [Redacted] is on TSMC's 4N, in the energy efficiency possibilities over something like the Steam Deck, which will be its closest comparison at the moment. Memory bandwidth, and how to reasonably work around it, will definitely be the biggest hurdle for portable devices until the next evolution in memory...
 
Is there any chance that docked mode could produce 4x as many pixels (as compared to 2.25x for the current Switch), or is that simply too big of a jump for most chips when given more electricity?
Whether they have a 720p screen (supersampled) or a 1080p screen, I do expect some Switch ports to be native 1440p to 4K, or to get there with DLSS.

The 4x pixel difference isn't what worries me. It's more the overall power gap between handheld and docked mode, the bandwidth constraints for Switch 2 games, and the potential hassle for Nintendo and devs of juggling multiple profiles across Switch and Switch 2 games. It will be interesting to see if we get multiple performance modes for most games.

With a 720p screen, handheld games could supersample from a higher internal resolution, or spend some of that GPU power on graphical detail and/or framerate.

I can see Smash Ultimate/Mario Kart 8 Deluxe/BotW rendering at 720p or 1080p native in handheld, and potentially native 4K (without DLSS) in docked mode. I wonder how much RAM bandwidth would be used (and whether there will be a third RAM profile for running Switch games), since 60-88 GB/s wouldn't be needed to run them at 720p-1080p.
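As a very rough illustration of how the resolution-dependent part of the bandwidth scales — this only counts render-target writes and ignores texture reads, overdraw and compression, so treat it as a lower bound, not a real estimate:

```python
# Very rough render-target traffic at 60 fps: one RGBA8 colour write plus one
# 32-bit depth write per pixel per frame. Ignores texture reads, overdraw,
# blending, and framebuffer compression, so real traffic is much higher —
# this only shows how the resolution-dependent part scales.
BYTES_PER_PIXEL = 4 + 4   # RGBA8 colour + D32 depth
FPS = 60

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    gbytes_per_s = w * h * BYTES_PER_PIXEL * FPS / 1e9
    print(f"{name:>5}: ~{gbytes_per_s:.1f} GB/s of render-target writes")
```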
 
Is there any chance that docked mode could produce 4x as many pixels (as compared to 2.25x for the current Switch), or is that simply too big of a jump for most chips when given more electricity?

I feel like the problem with this is that any chip capable of running 4x better can also run 2x better for a minimal increase in power requirements (i.e. increasing the ceiling for the chip also increases the floor).
 
Hey all, been gone since last Tuesday. Checked the OP and threadmarks, but did I miss much from last week? Saw something about the Switch OLED technically being able to output 4K60 because of how its DisplayPort 1.2 lanes are wired up? (But are we talking about outputting games in 4K, or just theoretical streaming content? Because I thought the OLED dock could already stream 4K content.)
 
I feel like the problem with this is that any chip capable of running 4x better can also run 2x better for a minimal increase in power requirements (i.e. increasing the ceiling for the chip also increases the floor).
The pixel gap between the Switch 2 and modern TVs being at least 4x and very possibly 9x is honestly pretty awkward for a hybrid system; we'll see what they come up with.
 
Hey all, been gone since last Tuesday. Checked the OP and threadmarks, but did I miss much from last week? Saw something about the Switch OLED technically being able to output 4K60 because of how its DisplayPort 1.2 lanes are wired up? (But are we talking about outputting games in 4K, or just theoretical streaming content? Because I thought the OLED dock could already stream 4K content.)
The OLED model should theoretically be capable of a max output resolution of 4K 60 Hz in TV mode, since all four lanes of DisplayPort 1.2 signals from the PI3USB30532 chip are connected to the Tegra X1+. (But in reality, the max output resolution in TV mode for the OLED model is 1080p 60 Hz, just like the original Nintendo Switch.)

The original Nintendo Switch, on the other hand, is only capable of a max output resolution of 1080p 60 Hz, since only two lanes of DisplayPort 1.2 signals from the PI3USB30532 chip are connected to the Tegra X1.
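For anyone wondering why the lane count is the deciding factor, here's the back-of-envelope maths (the HBR2 link rate and 8b/10b overhead are standard DP 1.2 figures; the 5% blanking allowance is just my assumption):

```python
# DisplayPort 1.2 (HBR2): 5.4 Gbit/s raw per lane, 8b/10b encoded,
# so 4.32 Gbit/s of usable video bandwidth per lane.
USABLE_GBPS_PER_LANE = 5.4 * 8 / 10

def link_gbps(lanes):
    return lanes * USABLE_GBPS_PER_LANE

def video_gbps(width, height, fps, bpp=24, blanking_overhead=1.05):
    # Rough pixel-data rate with a small allowance for blanking intervals
    # (reduced-blanking timings keep this small; 5% is an assumption).
    return width * height * fps * bpp * blanking_overhead / 1e9

print(f"2 lanes: {link_gbps(2):.2f} Gbit/s usable")
print(f"4 lanes: {link_gbps(4):.2f} Gbit/s usable")
print(f"1080p60 needs ~{video_gbps(1920, 1080, 60):.2f} Gbit/s")
print(f"4K60    needs ~{video_gbps(3840, 2160, 60):.2f} Gbit/s")
```

Two lanes (~8.6 Gbit/s) comfortably cover 1080p60 but fall well short of 4K60 (~12.5 Gbit/s); four lanes (~17.3 Gbit/s) cover it.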
 
The OLED model should theoretically be capable of a max output resolution of 4K 60 Hz in TV mode, since all four lanes of DisplayPort 1.2 signals from the PI3USB30532 chip are connected to the Tegra X1+. (But in reality, the max output resolution in TV mode for the OLED model is 1080p 60 Hz, just like the original Nintendo Switch.)

The original Nintendo Switch, on the other hand, is only capable of a max output resolution of 1080p 60 Hz, since only two lanes of DisplayPort 1.2 signals from the PI3USB30532 chip are connected to the Tegra X1.
Thank you for the clarification! So, in short, there's still no viable 4K output from either dockable Switch currently on the market? For future hardware, it might be able to use the OLED dock to pass a 4K signal through, but that's about it?
 
So, in short, there's still no viable 4K output from either dockable Switch currently on the market?
That's correct.

The reason it's noteworthy that all four lanes of DisplayPort 1.2 signals are connected to the Tegra X1+ on the OLED model is that it's proof Nintendo could have considered making the OLED model a mid-gen refresh at one point.
 
The pixel gap between the Switch 2 and modern TVs being at least 4x and very possibly 9x is honestly pretty awkward for a hybrid system; we'll see what they come up with.
Awkward to whom? High-density screens are still expensive. Hell, you don't even see games rendered that high on mobile anyway; it's only recently that AAA mobile games do 1080p.
 
I always find it a little surprising that the Switch can be criticized so much on a technical level when it embodies in many respects Nintendo's most ambitious hardware since the Gamecube.
 
I always find it a little surprising that the Switch can be criticized so much on a technical level when it embodies in many respects Nintendo's most ambitious hardware since the Gamecube.
as long as there are higher performance options out there, someone will ask, "why aren't you doing that?"
 
I'm curious if and/or when we see consumer APUs from AMD using chiplets. For the moment they're sticking with monolithic dies, which I can see making sense from a power consumption point of view (communicating over an interposer will consume more power than communicating within a monolithic die) and from a cost point of view, as their APUs are typically aimed at the lower end of the market. The main thing chiplets give AMD is the flexibility to deliver a wider range of products within a limited R&D budget. For example with Ryzen they could tape out a single die and be competitive with Intel from entry-level desktops all the way up to high-end server chips. I could see them getting to a place where a high-end APU becomes viable even if there's not a huge market for it because they've already got all the necessary chiplets ready and it's just a matter of sticking them together on an interposer.

In the console space, the place where I'm most interested in seeing the impact of chiplets is memory. As we've discussed in this thread, one of the main limiting factors on the performance of a Switch form-factor device is the memory. LPDDR5(X) only goes so fast, and going wider with the interface is just going to add cost, power consumption, and motherboard complexity. The only viable way to get better bandwidth without sacrificing power consumption is by moving to a wider, slower interface, and the only way that's viable is if you ditch the motherboard traces and move the memory onto an interposer with the chip. That is, adopt HBM or something like it.

HBM has faded away from the consumer space, but I wouldn't be surprised if we start seeing it (or a variant of it) re-appear in the next few years. When HBM debuted on AMD's R9 Fury back in 2015, Ryzen was still a couple of years away, and chiplet-style packaging was very rare, limited to exotic products like the Wii U. With chiplets becoming the standard, the previously prohibitive packaging costs of HBM will come down, and with TSVs also becoming more commonplace in consumer chips, I would expect that the cost of manufacturing the HBM stacks themselves will also likely decrease. Another factor is that the move to chiplets in the GPU space opens up HBM to the consumer market where it makes the most sense: laptop GPUs.

Laptop GPUs suffer from GDDR6's high power consumption, but as they typically share the same die as desktop parts, it hasn't been viable to switch to HBM unless the desktop lineup does as well, where the power consumption and reduced board space of HBM aren't nearly as beneficial. A couple of GPUs have been designed with HBM explicitly for the laptop space, namely AMD's Pro Vega 20, and the bizarre Kaby Lake G (which in retrospect seems like UCIe's dream), but they're very niche products. With AMD moving memory interfaces onto their own chiplets, though, it will now be viable for them to use a single GCD across both GDDR6 and HBM based products. Rather than having to tape out a new version of an entire monolithic die, they just have to tape out one HBM interface chiplet on a cheaper process, and it can be re-used multiple times across different products. So when AMD comes to produce laptop chips from their Navi 31 and Navi 32 dies, they could in theory offer versions with HBM memory and better power-efficiency. They could also, if they wanted, add an extra desktop product above the RX 7900 that swaps out GDDR6 for a much higher-bandwidth HBM memory pool.
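The power argument is easy to sanity-check with the per-bit energy figures that get thrown around for these interfaces. These are ballpark numbers from the literature rather than datasheet values, and the 400 GB/s is just an example laptop-GPU-class figure:

```python
# Ballpark DRAM interface energy figures (pJ per bit transferred) commonly
# cited in the literature — rough orders of magnitude, not datasheet numbers.
ENERGY_PJ_PER_BIT = {
    "GDDR6": 7.5,
    "HBM2":  3.9,
}

BANDWIDTH_GBS = 400  # example sustained bandwidth for a laptop-class GPU

for mem, pj in ENERGY_PJ_PER_BIT.items():
    watts = pj * 1e-12 * BANDWIDTH_GBS * 1e9 * 8
    print(f"{mem}: ~{watts:.0f} W of interface power at {BANDWIDTH_GBS} GB/s")
```

Saving on the order of 10 W at the same bandwidth is a big deal in a laptop power budget, which is why HBM keeps looking attractive there.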

Nintendo is kind of lucky with [redacted] in that they've been able to double the memory bus width over the original Switch, and when combined with the improvements from LPDDR4 to LPDDR5 and improvements in GPU bandwidth efficiency they've got room to make a sizeable jump in performance without being severely constrained by bandwidth. With whatever their successor to [redacted] is, I don't know if that will be the case. Doubling the interface width again seems pretty unlikely. Obviously it's foolish to try to predict Nintendo's actions this far in advance, but hypothetically if they were to try to make a successor to [redacted] in a similar form-factor with a significant jump in performance, I'm not sure how they would manage to do that without moving memory on-package with the SoC.

Nintendo, incidentally, are the only console manufacturer with experience of chiplet-style technologies, with their CPU and GPU combined on an MCM in the Wii U. Of course, that almost certainly added significantly to the cost of the Wii U, so perhaps they don't consider their experience with the technology a positive one.



I think the fact that their chiplet-based laptop CPUs are a minimum of 45W is more a matter of where they're positioned in their line-up than an inherent limitation of the technology. Their chiplet-based parts are all high-end 12-16 core CPUs with very limited iGPUs intended to be used with powerful dedicated GPUs (as you say, chunky laptops). If you go much below ~40W you're primarily looking at laptops without dedicated GPUs, and in that space these chips would be a hard sell, considering their iGPUs would be outperformed by even the most entry-level alternatives in that segment.

I suspect that, all other things being equal, a monolithic die should be more power-efficient than a chiplet setup, but we may get to a point where the difference is so minor, and the economics line up such that chiplet-based APUs make sense even in the low-power end of the market.
Hmm, speaking of on-package memory, recently I'm seeing a bit of speculation of Intel doing that for Lunar Lake. It kind of goes hand in hand with Lunar Lake re-using Arrow Lake's CPU architectures, so something's presumably happening with packaging to squeeze out perf/watt. And it's based off a Twitter photo that I actually can't see for myself.
(why yes, I do still believe in Intel as a potential wild card for consoles in the future)

Hmm, since that particular range of laptops would be expected to be paired with discrete GPUs... that probably brings the total power draw up to potentially the three-digit range. So at that point, the difference between monolithic and the current version of IFOP (Infinity Fabric On-Package) probably isn't all that significant, so sure, why not. But sticking with monolithic for the lower ranges suggests to me that it's still not quite there yet. Probably another iteration or two to go?

A little recap for the readers:
Zen 2/3 era: checking with this page, the IFOP's energy efficiency was under 2 pJ/bit. The IO die itself was also on GlobalFoundries 14/12nm. And the IF links themselves were either fully on at 100% power or off. As far as I'm aware, all Zen 2/3 era laptops are monolithic, so I'd conclude that this version of IFOP was a no-go.

Zen 4 era: I don't think AMD has disclosed their estimate for the efficiency of this recent version of the IFOP. Still, the IO die has been moved down to TSMC N6 (which ought to be a pretty nice upgrade). And checking with this page, it seems like moving down a node or two allowed AMD to go narrower/faster with the IF links to achieve the same bandwidth at lower voltage and less power. And there's the addition of intermediate power states (i.e. the links can be turned on without going full blast at 100%). That seems to be enough improvement to try with the Dragon Range laptops at least? But I don't get the impression that the IFOP is quite at the point where it works for the lower brackets.

Intel's aim of 0.2-0.3 pJ/bit for their interconnect would be interesting to see. And more or less necessary to pull off, considering that all of their consumer stuff is moving to tiles, IIRC?
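For a feel of what those pJ/bit figures mean in watts, here's a quick conversion at an example amount of die-to-die traffic (the 50 GB/s is purely illustrative):

```python
# Interconnect power = energy per bit (pJ/bit) x bits moved per second.
def interconnect_watts(pj_per_bit, bandwidth_gbytes_per_s):
    bits_per_s = bandwidth_gbytes_per_s * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_s

EXAMPLE_BANDWIDTH = 50  # GB/s of die-to-die traffic — purely illustrative

for label, pj in [("Zen 2/3-era IFOP (~2 pJ/bit)", 2.0),
                  ("Intel's target (0.2-0.3 pJ/bit)", 0.25)]:
    print(f"{label}: ~{interconnect_watts(pj, EXAMPLE_BANDWIDTH):.2f} W at {EXAMPLE_BANDWIDTH} GB/s")
```

Going from ~2 pJ/bit to ~0.25 pJ/bit turns roughly 0.8 W of link power into roughly 0.1 W at that traffic level, which is the difference between "irrelevant in a 45W+ laptop" and "acceptable in a low-power part".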
 


Seems we'll be getting the Zelda OLED tomorrow. OP posted a picture of the new SKU in the comments, too.

 


Seems we'll be getting the Zelda OLED tomorrow. OP posted a picture of the new SKU in the comments, too.


I wonder
 
Is there any chance that docked mode could produce 4x as many pixels (as compared to 2.25x for the current Switch), or is that simply too big of a jump for most chips when given more electricity?
It's the kind of gap that would probably mean intentionally nerfing undocked mode.

EDIT: But really, this is exactly the kind of problem DLSS is meant to solve. "Oh, you've got way too many pixels on your display to properly render all of them? I'll take care of that well enough."
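For reference, these are the usual DLSS 2 per-axis input scale factors and what they'd mean for a 4K output (whatever Nintendo/NVIDIA would actually ship on hypothetical hardware is obviously unknown):

```python
# Typical DLSS 2 per-axis scale factors and the internal render resolution
# they imply for a 4K (3840x2160) output.
OUTPUT = (3840, 2160)
modes = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       1 / 2,
    "Ultra Performance": 1 / 3,
}

for mode, scale in modes.items():
    w, h = round(OUTPUT[0] * scale), round(OUTPUT[1] * scale)
    print(f"{mode:>17}: {w}x{h} internal (~{scale ** 2:.0%} of the output pixels)")
```

Performance mode renders a quarter of the output pixels and Ultra Performance about a ninth, which is exactly the kind of gap being discussed here.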
 
Nintendo, incidentally, are the only console manufacturer with experience of chiplet-style technologies, with their CPU and GPU combined on an MCM in the Wii U. Of course, that almost certainly added significantly to the cost of the Wii U, so perhaps they don't consider their experience with the technology a positive one.

This had precedent in the GameCube and Wii, both having an MCM GPU containing distinct SRAM and EEPROM chips, although a later revision of the Wii's merged the SRAM with the main GPU chip.
 
I think there will be a lot of demand for a Switch 2 port of TotK with a native resolution of 720p/1080p in handheld (with DLSS acting as AA here), 4K docked, 60 FPS, less pop-in, and shorter load times.

Like... a lot of demand, lol.
 
Best part was probably the end, when they announced the OLED, slowly showing off the design like it's some super exciting, unexpected thing when... you know...
 
I hope y'all aren't trying to make statements about the game's performance and graphics from a badly compressed YT video of already badly captured footage from Nintendo, no?

I'm not saying this game will be 1080p/60fps, but it also won't be as bad as you saw just now.
 
IQ does have me a bit concerned, but the game still has a bit to go before it's out. YouTube also doesn't do well with anything with a lot of particle effects or a lot of different colors. I wouldn't doubt that resolution dropped, but I think a lot of it is also just YouTube compression making the game look worse than it does.
 