
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

That’s irrelevant to Nintendo enthusiasts, and it didn’t require any insider knowledge, because Nvidia announced it forever ago.
Right, I know. Part of me just thinks that was the basis for his claim, which somehow changed to TX1+ in a game of telephone.

Also, I love arguing semantics, so it irks me a bit when I see people call the TX1+ just TX1.
 
Right, I know. Part of me just thinks that was the basis for his claim, which somehow changed to TX1+ in a game of telephone.

Also, I love arguing semantics, so it irks me a bit when I see people call the TX1+ just TX1.
That’s very possibly what happened, yes, but if it was, I would like them to just say so. Now that we’ve got him here :)
 
I don’t believe the PS5 actually has mesh shaders. They have some custom “geometry engine” with similar functionality.
Geometry Engine is just what they call the Primitive Shaders. The PS5 has what you would call RDNA1.5: it has some RDNA2 features, but other features place it closer to RDNA1, based on analysis of die shots and what’s missing. The PS5’s Zen 2 also has a smaller FPU than the desktop variant, which goes to show how they customized it for their needs.


None of the consoles really use the full RDNA2 feature set, probably for cost and timing reasons, possibly for legal ones, or simply because they didn’t need all of it.
In a way, mesh shaders and primitive shaders aim to accomplish a similar task (triangle efficiency), partly through culling. And unlike primitive shaders, mesh shaders are now standardized.

This is about as far as I understand it, anyway. Also, UE5's Nanite is similar to mesh shaders, but takes a different path.

Still really like this demo


I believe that mesh shaders do it more efficiently, right?

AFAIK, primitive shaders and mesh shaders are related but aren’t the same thing. Mesh shaders are, well, a Microsoft technology, and Sony doesn’t use DirectX at all in their software stack, which is where mesh shaders come into play. The Geometry Engine (primitive shaders) seems like it’s meant to be the Sony version of that, without crossing into legal territory.


Probably, if Nintendo used the hardware feature, they would brand it as their own thing, or modify it.

I suppose it becomes an issue when devs start programming for it?


Edit: what I mean is that mesh shaders seem easier to program for and have more going for them than primitive shaders, which are older and non-standard, like you said.
 
Geometry Engine is just what they call the Primitive Shaders. The PS5 has what you would call RDNA1.5: it has some RDNA2 features, but other features place it closer to RDNA1, based on analysis of die shots and what’s missing. The PS5’s Zen 2 also has a smaller FPU than the desktop variant, which goes to show how they customized it for their needs.


None of the consoles really use the full RDNA2 feature set, probably for cost and timing reasons, possibly for legal ones, or simply because they didn’t need all of it.

I believe that mesh shaders do it more efficiently, right?

AFAIK, primitive shaders and mesh shaders are related but aren’t the same thing. Mesh shaders are, well, a Microsoft technology, and Sony doesn’t use DirectX at all in their software stack, which is where mesh shaders come into play. The Geometry Engine (primitive shaders) seems like it’s meant to be the Sony version of that, without crossing into legal territory.


Probably, if Nintendo used the hardware feature, they would brand it as their own thing, or modify it.

I suppose it becomes an issue when devs start programming for it?


Edit: what I mean is that mesh shaders seem easier to program for and have more going for them than primitive shaders, which are older and non-standard, like you said.
Well, the Vulkan implementation is based on Nvidia code they made for Turing, which is based on the hardware that also accelerates mesh shaders.

Considering that mesh shaders are compute code, the PS5 might still be able to do it, just without hardware acceleration. But anyone would just use the PS5's implementation of primitive shaders for optimal performance.
 
Is DLSS in portable possible?

Feels like the gap between portable and docked would be too big otherwise.
Yeah, since the target res is much lower.

DLSS from 480p to 720p, for example (assuming the screen is 720p), is much lighter than 1080p to 4K.
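
Rough pixel math on why that's so much lighter (a quick sketch; the resolutions are just the ones from the example above):

```python
# DLSS cost scales roughly with the number of output pixels it has to fill,
# so compare the two scenarios from the example above.

def pixels(w, h):
    return w * h

portable_in, portable_out = pixels(854, 480), pixels(1280, 720)   # 480p -> 720p
docked_in, docked_out = pixels(1920, 1080), pixels(3840, 2160)    # 1080p -> 4K

print(portable_out / portable_in)  # ~2.25x upscale factor
print(docked_out / docked_in)      # 4.0x upscale factor
print(docked_out / portable_out)   # 9.0 -- the 4K case fills ~9x more pixels
```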
 
I’m thinking portable mode won’t use DLSS and will just be 720p like the current Switch. No one knows, though.
The only precedent for this wrt Switch is perhaps limiting external developers from using the 460MHz GPU profile in portable mode for a while. I think MK11 is the first third-party game to touch that profile.
 
Well, the Vulkan implementation is based on Nvidia code they made for Turing, which is based on the hardware that also accelerates mesh shaders.

Considering that mesh shaders are compute code, the PS5 might still be able to do it, just without hardware acceleration. But anyone would just use the PS5's implementation of primitive shaders for optimal performance.
It’ll be pretty slow at the process; the issue is whether developers will code their games to take advantage of the feature or not, and when there isn’t 1:1 feature parity, devs tend not to waste time implementing it.
 
It’ll be pretty slow at the process; the issue is whether developers will code their games to take advantage of the feature or not, and when there isn’t 1:1 feature parity, devs tend not to waste time implementing it.
I don't think the differences in mesh shader implementation between the Series consoles and the PS5 will be all that big. As we see from Nanite, making a software solution is viable, and possibly having it accelerated is doable. But moving to mesh shaders is a big change in the render pipeline.
 
But in portable mode? I wonder how much power that would draw, since not even the Steam Deck manages that without pushing the price sky-high.
The Steam Deck isn’t really a fair comparison, as it doesn’t have a dedicated hardware accelerator on the silicon for the matrix math that a feature like DLSS requires. XeSS is usable, but will obviously be slower than with the dedicated hardware on Intel GPUs. And AMD doesn’t seem interested in investing in their own ML solution, opting for a spatial solution instead, which isn’t hardware intensive at all.

I don't think the differences in mesh shader implementation between the Series consoles and the PS5 will be all that big. As we see from Nanite, making a software solution is viable, and possibly having it accelerated is doable. But moving to mesh shaders is a big change in the render pipeline.
It would be the difference between effectively an RDNA1 feature and an RDNA2 feature that leverages a widely used graphics API; it remains to be seen how well the GE compares.
 
No, sadly. A couple of caveats to consider.

The A12X was on the 7nm process at the time; Dane will be on 8N, which is supposed to be an improved version of 8LP(x). Node-wise, it is already outclassed in logic.

The CPU the A12X has is clocked pretty high, and it curbstomps the Dane model's CPU, with around 150% of the multi-core performance the Dane model would ever have, unless Nintendo goes ballsy and clocks it pretty high, or at least high enough to trade decent blows with the A12X. Single-core score? Forget it. Though this doesn’t really matter, to be honest, on a game console, since they are always running multi-threaded applications anyway. Single-core matters more for phone things.

GPU-wise, well, this part I do expect to be pretty competitive, but keep in mind it's on a lesser node (presumably). What the iPad does, it does on a more efficient process, and at 15 watts portably, while the Switch would not be using 15W in portable, likely around half of that. But it doesn’t matter, since the GPUs in both are better than the GPU in the XB1 on paper specs.

The Dane will likely use a 4310mAh battery, while the iPad can use something like twice that, right? It’s less constrained by power draw and can run at higher power yet still match the battery life of the Mariko unit on the longer end.


HOWEVER! A couple of points of contention: the iPad is not meant for long gaming sessions, unlike the Dane model, and it has no active cooling. A GPU being better on paper does not mean it’s better in real-life performance; the thing can throttle down, though mobile devices have gotten pretty good in this regard.

Dane will be in a unique place: it will be more modern yet old at the same time, with a pretty heavy lean on the GPU side and ML hardware that allows it to punch well above its weight.


Nintendo can possibly use a denser battery of the same size, like in Samsung phones, which would allow for long battery life and slightly higher clocks, making it more performant before the docked-mode boost.

The current Mariko aims to draw only 7W, I think; Erista drew like 12W or so portably. The iPad draws 15W to perform its tasks (if I understood what you said correctly) at peak performance, but likely clocks down for energy-saving reasons. Dane doesn’t need to do that clock switching; it’s a constant device.
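
To put those wattages in perspective (a back-of-the-envelope sketch; the 3.7V nominal cell voltage is my assumption, and these are peak draws, so real battery life runs longer):

```python
# Rough battery life from the figures above. Assumes a typical 3.7V
# nominal Li-ion cell voltage (my assumption, not an official spec).
# 7W/12W are peak draws; average draw is lower, so real life runs longer.

battery_mah = 4310
capacity_wh = battery_mah / 1000 * 3.7  # ~15.9 Wh

for name, peak_watts in [("Erista", 12), ("Mariko", 7)]:
    print(f"{name}: ~{capacity_wh / peak_watts:.1f} h at peak draw")
# Erista: ~1.3 h, Mariko: ~2.3 h -- the gap mirrors the battery-life uplift
```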


It’s why I do hope they lean more into the CPU being clocked modestly well at the expense of the GPU speeds. Should tide them over until the next platform rears its ugly head and we have the 7th speculation thread ;)


A main takeaway is that this leans more into apples-to-oranges comparison territory.
You Sir are a gentleman and a scholar.

I am humbled that you took the time to come up with such a lengthy answer, with an intelligible vocabulary to boot. While I understand that passively cooled mobile devices are unlikely to run at peak performance all the time (and thus, that the iPad can't reach the power of an Xbox One sustainably), I still hope that whatever the next Switch launches with can run Nintendo's first-party line-up natively at a level of quality close to the cross-gen PS4/PS5 and Xbox One/Series games.

Halo Infinite looks reasonably good on an Xbox One S, and so does A Plague Tale: Innocence on the PS4. If the next Mario or Zelda can reach that level of fidelity, we are in for a treat.
 
They would only be able to double the power going from 16nm to 8nm, wouldn’t they?
It wouldn’t really work like that. For starters, we are still using the OG Switch config from the 20nm chip; the 16nm/14nm units used the more efficient process for battery life, plus more efficient RAM. And 8nm is an improved version of 10nm.

We have two different process nodes, the TSMC one and the Samsung one. Both are marketed as “8nm”, but in reality they do not function 1:1 identically. Nodes are marketing terms anyway.

And this presumably has more SMs and CPU cores, but they are limited by the laws of physics and by the clock speeds allowed for acceptable battery life.

You Sir are a gentleman and a scholar.

I am humbled that you took the time to come up with such a lengthy answer, with an intelligible vocabulary to boot. While I understand that passively cooled mobile devices are unlikely to run at peak performance all the time (and thus, that the iPad can't reach the power of an Xbox One sustainably), I still hope that whatever the next Switch launches with can run Nintendo's first-party line-up natively at a level of quality close to the cross-gen PS4/PS5 and Xbox One/Series games.

Halo Infinite looks reasonably good on an Xbox One S, and so does A Plague Tale: Innocence on the PS4. If the next Mario or Zelda can reach that level of fidelity, we are in for a treat.
I think they can, pretty comfortably, if Dane pans out to have specs even remotely close to those speculated. It’ll just be a time issue; games will take a much longer time to develop, so expect that :p.
 
We have two different process nodes, the TSMC one and the Samsung one. Both are marketed as “8nm”, but in reality they do not function 1:1 identically. Nodes are marketing terms anyway.
I believe TSMC only has a 10 nm* process node, which no company has used since 2Q 2020. And I believe Samsung's the only foundry with an 8 nm* process node.

* purely a marketing nomenclature for all the foundry companies
 
I think they can, pretty comfortably, if Dane pans out to have specs even remotely close to those speculated. It’ll just be a time issue; games will take a much longer time to develop, so expect that :p.
I wonder about the games that are being made for Dane right now. If they really intended a 2020 launch but now have had a year-plus of extra dev time, I wonder how that would affect some games, especially the ones exclusive to the Dane Switch.
 
So do we think the Switch Pro will look exactly like the Switch with a new chip inside, or will it get a complete redesign?
Wouldn't be surprised if it looks very similar to the Switch OLED, maybe just a bit bigger dimension-wise.

As for materials, it would likely have more metal in the design. I honestly think they may take a Galaxy Note 4 approach and have a plastic back with metal on the top/bottom/sides.
[Image: Samsung Galaxy Note 4]
 
So do we think the Switch Pro will look exactly like the Switch with a new chip inside, or will it get a complete redesign?
I imagine that the DLSS model*'s design will be very similar to the OLED model's design. However, I'm not sure if the DLSS model* will use different materials for the housing, or materials similar to those used for the OLED model's housing. I suppose it depends on how Nintendo plans to price the DLSS model*.
 
So I did some research to see how the Steam Deck compares to the PS4. I couldn't really get an accurate head-to-head assessment of games, but this video shows how the Steam Deck performs theoretically vs the Xbox Series X/S and PS5, which in turn helped me figure out how it compares to the PS4. Steam Deck specs are up to 1.6 TFLOPs of GPU, a 4-core/8-thread CPU at up to 3.5GHz, with 16GB LPDDR5 (88GB/s bandwidth). The guy doing the video below states that the Deck's CPU is a little less than half the power, and its GPU is 1.25x more performant per flop than GCN.


So in other words, the Steam Deck is a portable PS4 in GPU power that plays at 720p, and it should easily be able to outperform base PS4 games in GPU and CPU, notably on an 800p screen. Not to mention way faster loading times and the amount of RAM.

But I was only curious about how the Steam Deck performs vs the base PS4, because it makes me think of what Switch 2's potential could be! 8nm Nvidia Ampere gets compared to 7nm Zen 2 a lot, especially in power..

So theoretically a 1.6-2 TFLOPs GPU on 15 watts should be reachable, and to save battery, 800 GFLOPs in handheld mode (a handheld Xbone) with 2 TFLOPs in docked mode. For the CPU, the best-case scenario when compared to the Series X is gonna be a 3x gap, like Switch vs PS4. And all this on 8-12 GB of LPDDR5 memory on a quad-channel bus, hopefully using full memory speeds to reach 102.4 GB/s of bandwidth, for $400. An 8nm node in Q4 2022/Q1 2023 feels really off, but it would be a full generational leap. And that's not counting DLSS performance.
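
For what it's worth, here's the arithmetic behind that comparison as I read it (a sketch; the 1.25x per-flop figure is the one from the video, and the PS4/bandwidth numbers are the widely cited specs):

```python
# Steam Deck vs base PS4, using the per-flop figure from the video above.

deck_tflops = 1.6          # Steam Deck RDNA 2 peak
per_flop_vs_gcn = 1.25     # RDNA 2 perf per flop vs GCN, per the video
ps4_tflops = 1.84          # base PS4 (GCN)

deck_gcn_equiv = deck_tflops * per_flop_vs_gcn
print(deck_gcn_equiv)               # 2.0 "GCN-equivalent" TFLOPs
print(deck_gcn_equiv / ps4_tflops)  # ~1.09x a base PS4, before the 800p advantage

# The 102.4 GB/s figure quoted for a hypothetical Switch 2:
# quad-channel (128-bit) LPDDR5 at 6400 MT/s moves 16 bytes per transfer.
print(6400e6 * 16 / 1e9)            # 102.4 GB/s
```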
 
How do you guys feel about a bigger Switch? Assuming the Joy-Cons would only get slightly larger, you could fit an 8.1″ display into a device with the same width as a Wii U GamePad, which Nintendo felt was comfortable for long play sessions.
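
The geometry checks out roughly like this (a sketch; the GamePad and grip widths are my rough figures, not official measurements):

```python
# Sanity-check: width of a 16:9 panel with an 8.1-inch diagonal,
# vs the Wii U GamePad's ~255 mm overall width (my rough figure).
import math

def panel_mm(diag_in, ar_w=16, ar_h=9):
    d = math.hypot(ar_w, ar_h)
    return diag_in * 25.4 * ar_w / d, diag_in * 25.4 * ar_h / d

w, h = panel_mm(8.1)
print(round(w), round(h))   # ~179 mm x ~101 mm of glass

grip = 35                   # per-side Joy-Con width, a rough estimate
print(round(w + 2 * grip))  # ~249 mm -- in the GamePad's ballpark
```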
 
So I did some research to see how the Steam Deck compares to the PS4. I couldn't really get an accurate head-to-head assessment of games, but this video shows how the Steam Deck performs theoretically vs the Xbox Series X/S and PS5, which in turn helped me figure out how it compares to the PS4. Steam Deck specs are up to 1.6 TFLOPs of GPU, a 4-core/8-thread CPU at up to 3.5GHz, with 16GB LPDDR5 (88GB/s bandwidth). The guy doing the video below states that the Deck's CPU is a little less than half the power, and its GPU is 1.25x more performant per flop than GCN.


So in other words, the Steam Deck is a portable PS4 in GPU power that plays at 720p, and it should easily be able to outperform base PS4 games in GPU and CPU, notably on an 800p screen. Not to mention way faster loading times and the amount of RAM.

But I was only curious about how the Steam Deck performs vs the base PS4, because it makes me think of what Switch 2's potential could be! 8nm Nvidia Ampere gets compared to 7nm Zen 2 a lot, especially in power..

So theoretically a 1.6-2 TFLOPs GPU on 15 watts should be reachable, and to save battery, 800 GFLOPs in handheld mode (a handheld Xbone) with 2 TFLOPs in docked mode. For the CPU, the best-case scenario when compared to the Series X is gonna be a 3x gap, like Switch vs PS4. And all this on 8-12 GB of LPDDR5 memory on a quad-channel bus, hopefully using full memory speeds to reach 102.4 GB/s of bandwidth, for $400. An 8nm node in Q4 2022/Q1 2023 feels really off, but it would be a full generational leap. And that's not counting DLSS performance.

2 TFLOPs is where we are expecting the docked GPU power to land.
CPU-wise, 8 A78Cs is likely going to be the config, which would put it over 250% stronger than the PS4's and Xbone's CPUs. Not quite at the level of the next-gen consoles, but far closer to them than they are to the last-gen systems.

Memory: yeah, 100-150GB/s is the likely range.

Although DLSS is the big game-changer here, especially if they can break past the 4x upscale of DLSS Performance mode (i.e. 6x, or even the full 9x of Ultra Performance).

With DLSS you sort of have to contend with "virtual TFLOPs", aka the TFLOP number that Dane would be emulating after DLSS.

Even a conservative number like a doubling of effective TFLOPs (2 real TFLOPs, 2 "virtual" TFLOPs) would put it right behind the Series S GPU-wise, and we know DLSS can shoot further, especially if Nvidia updates DLSS to add a spatial upscaler as an end-step, or Nintendo makes an OS-level upscaler.

In that case, it could surpass the Series S by a fair margin, because it could render the game at 720p, upscale that to 1440p via DLSS Performance, then do a spatial upscale to 4K to top it off.

And usually, with DLSS Performance mode, it looks about as good as the output resolution (540p to 1080p looks a lot like 1080p).

So assuming a worst-case scenario for next-gen ports, it would still likely outpace the Series S, as it could use DLSS Performance mode to render a game at 540p, then upscale that to 1080p, and, if they add a spatial component to DLSS or at the OS level, upscale that to 1440p.

So we are talking about a system with an effective output beyond 4.5 Ampere TFLOPs here.
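
Putting that chain into numbers (a sketch; the 2 TFLOPs docked figure and the scale factors are this thread's speculation, not confirmed specs):

```python
# The 720p -> 1440p (DLSS Performance) -> 4K (spatial) chain from above.
# All inputs are speculation from this thread, not confirmed specs.

def px(w, h):
    return w * h

render   = px(1280, 720)   # internal render
dlss_out = px(2560, 1440)  # DLSS Performance output (4x the pixels)
final    = px(3840, 2160)  # hypothetical spatial pass to 4K

print(dlss_out / render)   # 4.0 -- DLSS Performance area factor
print(final / render)      # 9.0 -- total factor with the spatial step

# Conservative "virtual TFLOPs": treat DLSS as doubling effective output.
real_tflops = 2.0
print(real_tflops * 2)     # 4.0 "effective" -- right behind the Series S's 4.0
```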

And because it is rendering at a lower internal res in most cases, memory bandwidth becomes less of a problem, as the VRAM buffer is taken up less by the resolution.

The only major thing the Series S would have over the Dane SoC is CPU performance, and even then, that likely won't become much of an issue until late-gen.

And storage is somewhat unpredictable, as they have a large number of options, some of which could very well match up to the Series S|X SSDs (e.g. UFS 3.1 accelerated via RTX IO would produce about a 4.2GB/s storage medium, versus the 4.8GB/s post-Velocity Series SSD).
 
How do you guys feel about a bigger Switch? Assuming the Joy-Cons would only get slightly larger, you could fit an 8.1″ display into a device with the same width as a Wii U GamePad, which Nintendo felt was comfortable for long play sessions.
To me, it depends on how Nintendo handles the ergonomics and weight, especially with the slightly larger Joy-Cons attached.
 
I believe TSMC only has a 10 nm* process node, which no company has used since 2Q 2020. And I believe Samsung's the only foundry with an 8 nm* process node.

* purely a marketing nomenclature for all the foundry companies
Huh, I could have sworn they had an 8nm, which is just the 10nm.
I wonder about the games that are being made for Dane right now. If they really intended a 2020 launch but now have had a year-plus of extra dev time, I wonder how that would affect some games, especially the ones exclusive to the Dane Switch.
I’d suspect they would be using Dane for testing other titles that can see improvements, or probably for better-optimized titles if they felt pressured. +1 year can do a lot, but it’s COVID times, so it would do something, most likely.
So I did some research to see how the Steam Deck compares to the PS4. I couldn't really get an accurate head-to-head assessment of games, but this video shows how the Steam Deck performs theoretically vs the Xbox Series X/S and PS5, which in turn helped me figure out how it compares to the PS4. Steam Deck specs are up to 1.6 TFLOPs of GPU, a 4-core/8-thread CPU at up to 3.5GHz, with 16GB LPDDR5 (88GB/s bandwidth). The guy doing the video below states that the Deck's CPU is a little less than half the power, and its GPU is 1.25x more performant per flop than GCN.


So in other words, the Steam Deck is a portable PS4 in GPU power that plays at 720p, and it should easily be able to outperform base PS4 games in GPU and CPU, notably on an 800p screen. Not to mention way faster loading times and the amount of RAM.

But I was only curious about how the Steam Deck performs vs the base PS4, because it makes me think of what Switch 2's potential could be! 8nm Nvidia Ampere gets compared to 7nm Zen 2 a lot, especially in power..

So theoretically a 1.6-2 TFLOPs GPU on 15 watts should be reachable, and to save battery, 800 GFLOPs in handheld mode (a handheld Xbone) with 2 TFLOPs in docked mode. For the CPU, the best-case scenario when compared to the Series X is gonna be a 3x gap, like Switch vs PS4. And all this on 8-12 GB of LPDDR5 memory on a quad-channel bus, hopefully using full memory speeds to reach 102.4 GB/s of bandwidth, for $400. An 8nm node in Q4 2022/Q1 2023 feels really off, but it would be a full generational leap. And that's not counting DLSS performance.

Ampere isn’t compared to Zen 2, it’s compared to RDNA 2 ;P.

In this case, you’d be comparing the Ampere/Lovelace setup or something of the sort to RDNA1.5-1.8 ish of the other consoles. :p

(Not that it would matter much in this case I suppose)

Also, Ampere is a lot more bandwidth efficient than its GCN counterparts (PS4/XB1), and probably noticeably more bandwidth efficient than RDNA as well, despite RDNA being much newer; just not as big a gap as with GCN, I’ll assume.
 
Huh, I could have sworn they had an 8nm, which is just the 10nm.

I’d suspect they would be using Dane for testing other titles that can see improvements, or probably for better-optimized titles if they felt pressured. +1 year can do a lot, but it’s COVID times, so it would do something, most likely.

Ampere isn’t compared to Zen 2, it’s compared to RDNA 2 ;P.

In this case, you’d be comparing the Ampere/Lovelace setup or something of the sort to RDNA1.5-1.8 ish of the other consoles. :p

(Not that it would matter much in this case I suppose)

Also, Ampere is a lot more bandwidth efficient than its GCN counterparts (PS4/XB1), and probably noticeably more bandwidth efficient than RDNA as well, despite RDNA being much newer; just not as big a gap as with GCN, I’ll assume.
It sorta is. Here is an in-depth article.

 
It sorta is. Here is an in-depth article.

I was referring to TSMC, not Samsung.

However, the article did help with one aspect: the back end coming from a larger node.

It could be that 8N is like Sammy's 11nm. And in the case of 8N, Dane could go to Samsung's 7nm LP(x) and get a die shrink from that.
 
Hi everyone! Didn't know that you all had moved to Famiboards. Nice to see everyone!
I was doing some reading on AnandTech's Apple A15 breakdown, and one thing that impressed me was Apple's 32MB SLC. What does everyone think are the chances that Nvidia and Nintendo go for a full Arm implementation of the L3 cache (8MB) instead of cutting it down like Qualcomm and Samsung do? The biggest point against that would be area, given that Dane is 8nm based.
This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. These newer nodes have a significant advantage when it comes to SRAM capacity, allowing for a significant performance boost compared to 8 nm, which only has half the theoretical SRAM density of 5LPE and a third of N5's. Using a newer node could actually be cheaper if we consider that they would otherwise have to use two more expensive LPDDR5 modules to reach the performance of two LPDDR4X ones coupled with a 5 nm SoC that has more system cache than a bigger 8 nm one. That's what Apple is doing with its A14/A15/M1 products.
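To make the density point concrete (a sketch using only the relative figures stated above; the units are arbitrary, not real mm²):

```python
# Relative die area for the same 8 MB of SRAM, using the ratios above:
# 8 nm has ~half the SRAM density of 5LPE and ~a third of N5's.
# Area is normalised to 8 nm = 1.0 -- arbitrary units, not real mm^2.

density = {"8N": 1.0, "5LPE": 2.0, "N5": 3.0}  # relative bits per area

for node, d in density.items():
    print(f"8 MB L3 on {node}: {1.0 / d:.2f}x the 8N area")
# 8N: 1.00x, 5LPE: 0.50x, N5: 0.33x -- why big caches favour newer nodes
```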
What option do they have that won't blow out the budget? Nvidia always planned for this to be 8nm. Maybe if they'd thought about using someone in addition to Samsung from the start, but we'd probably have heard about that by now.
I suspect that the delay of Dane, plus the rumours of it not having been taped out, may have been linked to a change in the process node used for it. Nvidia may have asked Samsung to make a new node that would be compatible with 8N, the same way N6 is a denser and more power-efficient version of N7P due to the addition of EUV layers in the process (as opposed to N7+/N5, which are fully EUV based). The node could be used for entry-level/mid-range chipset products past 2021, due to 5 nm (TSMC/Samsung) capacity being allocated to high-end chipsets, the same way they made 12 nm products in 2021.
 
It sorta is. Here is an in-depth article.


Extremely interesting article, and it sounds like Ampere's issues have more to do with poor design than with Samsung's 8N manufacturing process.
Whether this can be worked out in Lovelace is unclear at this point; either they've addressed many things, or they're depending on TSMC's 5nm process to make up ground and waiting until Hopper to truly solve Ampere's flaws.
This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. These newer nodes have a significant advantage when it comes to SRAM capacity, allowing for a significant performance boost compared to 8 nm, which only has half the theoretical SRAM density of 5LPE and a third of N5's. Using a newer node could actually be cheaper if we consider that they would otherwise have to use two more expensive LPDDR5 modules to reach the performance of two LPDDR4X ones coupled with a 5 nm SoC that has more system cache than a bigger 8 nm one. That's what Apple is doing with its A14/A15/M1 products.

I suspect that the delay of Dane, plus the rumours of it not having been taped out, may have been linked to a change in the process node used for it. Nvidia may have asked Samsung to make a new node that would be compatible with 8N, the same way N6 is a denser and more power-efficient version of N7P due to the addition of EUV layers in the process (as opposed to N7+/N5, which are fully EUV based). The node could be used for entry-level/mid-range chipset products past 2021, due to 5 nm (TSMC/Samsung) capacity being allocated to high-end chipsets, the same way they made 12 nm products in 2021.
I'm wondering if tape-out hasn't happened yet because of possible tweaks to the architecture, more so than just the choice of manufacturing process node. RDNA2 definitely caught Nvidia by surprise, just as Zen 2 and Zen 3 caught Intel resting on its laurels, so the advancements in Lovelace are either superficial in scope, holding over until Hopper, or Nvidia will try to tweak Ampere's shortcomings as much as possible.

Whether we could see a significant number of architectural differences between desktop Lovelace and Nintendo’s Dane SoC is another open question.
 
It sorta is. Here is an in-depth article.

It remains to be seen if Nvidia will truly fix Ampere’s flaws in the next generation or if they will rely on features like DLSS and a superior real-time raytracing implementation in order to drive sales.
Strange quote. Gamers don't care about efficiency, they care about features. All efficiency does is allow for higher performance (lower power consumption? Ha! Turn that bitch into a space heater!) to drive those features.

But if Lovelace does fix those issues, which relate to low CUDA core utilization (hence why the performance uplift doesn't scale with the new design's increased core count unless you're doing pure compute), then Dane could possibly hit that much harder.
 
Igor's Lab did a comparison between DLSS and FSR in Avengers. The image comparison was at 1440p: Ultra Performance DLSS (480p internal) vs Performance FSR (720p internal). It's not exactly a fair comparison, since he's using a scene where he's standing around not doing anything; that gives the image temporal stability and makes it look better than it does when you're playing normally. But it does show that DLSS can work with such low inputs, provided the output is sufficiently high.

[Image: 1440p native]

[Image: 1440p DLSS Ultra Performance]

[Image: 1440p FSR Performance]
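
For reference, the input/output factors in that comparison (a quick sketch; resolutions as stated above):

```python
# Area factors in Igor's Lab comparison above, both targeting 1440p.

def px(w, h):
    return w * h

out_1440p   = px(2560, 1440)
dlss_up_in  = px(854, 480)    # DLSS Ultra Performance input
fsr_perf_in = px(1280, 720)   # FSR Performance input

print(out_1440p / dlss_up_in)   # ~9x -- DLSS reconstructing far more per pixel
print(out_1440p / fsr_perf_in)  # 4.0x
```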


 
This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. These newer nodes have a significant advantage when it comes to SRAM capacity, allowing for a significant performance boost compared to 8 nm, which only has half the theoretical SRAM density of 5LPE and a third of N5's. Using a newer node could actually be cheaper if we consider that they would otherwise have to use two more expensive LPDDR5 modules to reach the performance of two LPDDR4X ones coupled with a 5 nm SoC that has more system cache than a bigger 8 nm one. That's what Apple is doing with its A14/A15/M1 products.

I suspect that the delay of Dane, plus the rumours of it not having been taped out, may have been linked to a change in the process node used for it. Nvidia may have asked Samsung to make a new node that would be compatible with 8N, the same way N6 is a denser and more power-efficient version of N7P due to the addition of EUV layers in the process (as opposed to N7+/N5, which are fully EUV based). The node could be used for entry-level/mid-range chipset products past 2021, due to 5 nm (TSMC/Samsung) capacity being allocated to high-end chipsets, the same way they made 12 nm products in 2021.
I don't think Dane's likely to be fabricated using a 5 nm process node from Samsung or TSMC, since Nvidia's probably going to prioritise fabricating consumer Lovelace GPUs (e.g. AD102) and datacentre Hopper GPUs on 5 nm capacity from Samsung and/or TSMC. And Samsung was rumoured, on 5 July 2021, to have yields below 50% for some of its 5 nm process nodes, which seems to be validated by a Xiaomi executive's comment that a lack of capacity was the reason the Snapdragon 780G, fabricated using Samsung's 5LPE process node, was cancelled. And Microsoft was rumoured to be partnering with AMD to design an Arm-based SoC for a Microsoft Surface device, which was supposed to be fabricated using a 5 nm process node from Samsung, but is now being fabricated using a 5 nm process node from TSMC due to low yields.

Outside of the lithography being used, I think Samsung's 7LPP process node used for the Exynos 9825 is technically compatible with Samsung's 8N process node, considering there are very few differences between Samsung's 7LPP process node used for the Exynos 9825 and Samsung's 8LPP process node used for the Exynos 9820.

~

Anyway, Nvidia acquired Oski Technology, a company specialising in formal verification of silicon, on 8 October 2021. I wonder when is the earliest Oski Technology's expertise could be used for verifying the SoCs of future Nintendo consoles, if Nvidia plans to use Oski Technology's expertise for all of the chips it designs. Of course, I don't deny the possibility that Nvidia only plans on using Oski Technology's expertise for certain chips (e.g. datacentre GPUs, datacentre CPUs, etc.).
 
Do we have any good estimates of how much it would cost Nintendo to go for a newer node? I wonder how much Dane has cost Nintendo on the R&D side; having to scrap that and use a newer node would also add to the expense.
That won’t happen.
 
My OLED Model's wifi download speed hovers around 30Mbps, compared to about 20Mbps with my OG Switch. Looking at this Japanese teardown video, it seems that the new LDS antennae help. Yes, you read that right—plural (see my circles below):

[Annotated teardown overview image]


There are two antenna connectors near the center:
[Image]


The big antenna:
[Image]


The small one:
[Image]


For comparison, here's the PCB antenna in the OG:
[Image]


Edit: @Intoxicate suggested that one of the antennae is for Bluetooth, not wifi, which makes sense. I was quoting the caption in the video (around 2:23), but can't confirm its veracity. Either way, in my experience the wifi reception has improved.
 
Is it really two wifi antennas, or one for wifi and one for Bluetooth? The OG also had two antennas for those, but the Bluetooth one is in the bezel of the screen, so most teardowns don’t show it.
 
Is it really two wifi antennas, or one for wifi and one for Bluetooth? The OG also had two antennas for those, but the Bluetooth one is in the bezel of the screen, so most teardowns don’t show it.
That's possible. I was quoting the caption in the video (around 2:23), but can't confirm.
 
My OLED Model's wifi download speed hovers around 30Mbps, compared to about 20Mbps with my OG Switch. Looking at this Japanese teardown video, it seems that the new LDS antennae help. Yes, you read that right—plural (see my circles below):

[Annotated teardown overview image]


There are two antenna connectors near the center:
[Image]


The big antenna:
[Image]


The small one:
[Image]


For comparison, here's the PCB antenna in the OG:
[Image]
Have you noticed any substantial improvements to the Wi-Fi reception for the OLED model?
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

