> In no possible timeline would they end TX1 production this year. Even less in January.

They actually did do this.
It was just TX1 and not TX1+ as was indicated.
> They actually did do this.

That’s irrelevant to Nintendo enthusiasts, and it didn’t require any insider knowledge because Nvidia announced it forever ago.
> That’s irrelevant to Nintendo enthusiasts, and it didn’t require any insider knowledge because Nvidia announced it forever ago.

Right, I know. Part of me just thinks that was the basis for his claim, which somehow changed to TX1+ in a game of telephone.
> Right, I know. Part of me just thinks that was the basis for his claim, which somehow changed to TX1+ in a game of telephone.

That’s very possibly what happened, yes, but if it was, I would like them to just say so, now that we got him here.
Also I love arguing semantics so it irks me a bit when I see people call TX1+ just TX1.
> I don’t believe PS5 actually has mesh shaders. They have some custom “geometry engine” with similar functionality.

Geometry Engine is just what they call the Primitive Shaders. PS5 has what you would call RDNA 1.5: it has some RDNA 2 features, but other features place it closer to RDNA 1, based on analysis of die shots and missing functionality. The PS5’s Zen 2 cores also have a smaller FPU than the desktop variant, which goes to show how they customized it for their needs.
In a way, mesh shaders and primitive shaders aim to accomplish a similar task (triangle efficiency), partly through culling. And unlike primitive shaders, mesh shaders are now standardized.
This is about as far as I understand it, anyway. Also, UE5's Nanite is similar to mesh shaders, but takes a different path.
Still really like this demo
> In no possible timeline would they end TX1 production this year. Even less in January.

It was NSO related, but yes, the TX1 is indeed end of life, and yes, I don’t know about this point (it wasn’t accurate or was misinterpreted).
> Geometry Engine is just what they call the Primitive Shaders. […]

Well, the Vulkan implementation is based on Nvidia code they made for Turing, which is based on the hardware that also accelerates mesh shaders.
None of the consoles really use the full RDNA2 feature set, probably for cost and timing reasons, possibly legal ones, or simply because they don’t need all of it.
I believe mesh shaders do it more efficiently, right?
AFAIK, PShaders and MShaders are related but aren’t the same thing. MShaders are, well, a MSFT technology, and Sony doesn’t use DX at all in their software stack, which is where MShaders come into play. The GE (PShaders) seems like it could be meant to be the Sony version of that, without crossing into legal territory.
Probably if Nintendo used the hardware feature, they would brand it their own thing. Or modify it.
I suppose it becomes an issue when devs start programming for it?
Edit: what I mean is that it seems like MShaders are easier to program for and have more going for them than PShaders, which are older and non-standard like you said.
> It was NSO related, but yes, the TX1 is indeed end of life, and yes, I don’t know about this point (it wasn’t accurate or was misinterpreted).

TX1, yes. The TX1+ that Nintendo has used since 2019, no.
> TX1, yes. The TX1+ that Nintendo has used since 2019, no.

I know… last year he said both, and that (now) wasn’t accurate.
> They should only be able to double the power from 16nm to 8nm, wouldn’t they?

It doesn’t work like that.
> They should only be able to double the power from 16nm to 8nm, wouldn’t they?

No. You'll never get perfect scaling like that, even ignoring more important factors.
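To put rough numbers on that: node names stopped tracking any physical feature size years ago, so the naive assumption (halving the number quadruples density) badly overshoots. A quick sketch using commonly cited, approximate logic-density figures for TSMC 16FF and Samsung 8LPP (treat the exact values as ballpark, not gospel):

```python
# Node names suggest a linear feature size, so naive scaling from
# "16nm" to "8nm" would quadruple transistor density. Published
# density estimates (approximate, logic-only) tell a different story.

TSMC_16FF_MTR_MM2 = 28.9     # ~million transistors per mm^2 (approx.)
SAMSUNG_8LPP_MTR_MM2 = 61.2  # (approx.)

naive = (16 / 8) ** 2                        # 4.0x if names were literal
actual = SAMSUNG_8LPP_MTR_MM2 / TSMC_16FF_MTR_MM2

print(f"naive scaling:  {naive:.1f}x")       # 4.0x
print(f"actual density: {actual:.1f}x")      # ~2.1x
```

And density is only one factor; power and frequency scale worse still, which is why "double the power" doesn't follow from the node name.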
> I’d much rather have a stable 1080p 60fps portable device than half-assed 4K.

DLSS 4K is basically just as good; a normal person wouldn’t be able to tell the difference between native 4K and DLSS 4K.
> DLSS 4K is basically just as good; a normal person wouldn’t be able to tell the difference between native 4K and DLSS 4K.

But in portable mode? I wonder how much power that would draw, since not even the Steam Deck is doing that without raising the price up to the sky.
> Is DLSS in portable possible? Feels like the gap between portable and docked would be too big otherwise.

Yeah, since the target res is much lower.
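For a rough idea of why the portable target is so much cheaper: DLSS 2 renders internally at a fixed per-axis fraction of the output resolution. A sketch using the preset scale factors as they are commonly documented (mode names and factors taken from public material, so treat them as approximate):

```python
# DLSS 2 internal render resolution per quality preset
# (per-axis scale factors as commonly documented).
SCALE = {
    "quality": 1 / 1.5,
    "balanced": 1 / 1.724,
    "performance": 1 / 2,
    "ultra_performance": 1 / 3,
}

def internal_res(out_w, out_h, mode):
    """Internal render resolution for a given output resolution and preset."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

# Docked: 4K output in performance mode renders at 1080p internally.
print(internal_res(3840, 2160, "performance"))   # (1920, 1080)

# Portable: 1080p output in quality mode only needs a 720p render,
# so the power cost is far below docked 4K.
print(internal_res(1920, 1080, "quality"))       # (1280, 720)
```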
> But in portable mode? I wonder how much power that would draw, since not even the Steam Deck is doing that without raising the price up to the sky.

I’m thinking portable mode won’t use DLSS and will just be 720p like the current Switch; no one knows though.
> I’m thinking portable mode won’t use DLSS and will just be 720p like the current Switch; no one knows though.

The only precedent for this wrt Switch is perhaps limiting external developers from using the 460MHz GPU profile in portable mode for a while. I think MK11 is the first 3rd party game to touch that profile.
> Well, the Vulkan implementation is based on Nvidia code they made for Turing, which is based on the hardware that also accelerates mesh shaders.

It’ll be a pretty slow process. The issue is whether developers will code their games to take advantage of the feature or not, and when it’s not 1:1 feature parity, devs tend not to waste time implementing it.
Considering that mesh shaders are compute code, PS5 might still be able to do it, just without hardware acceleration. But anyone would just use PS5's implementation of primitive shaders for optimal performance.
> It’ll be a pretty slow process. […]

I don't think the differences in mesh shader implementation between Series and PS5 will be all that big. As we see from Nanite, making a software solution is viable, and possibly having it accelerated is doable. But moving to mesh shaders is a big change in the render pipeline.
> But in portable mode? I wonder how much power that would draw, since not even the Steam Deck is doing that without raising the price up to the sky.

The Steam Deck isn’t really a fair comparison, as it doesn’t have a dedicated hardware accelerator on the silicon for the matrix math necessary for a feature like DLSS. XeSS is usable, but will obviously be slower than on the dedicated hardware of Intel GPUs. And AMD doesn’t seem interested in investing in their own ML solution, opting for a spatial solution instead, which isn’t hardware intensive at all.
> I don't think the differences in mesh shader implementation between Series and PS5 will be all that big. […]

It would be the difference between effectively an RDNA1 feature and an RDNA2 feature that leverages a widely used graphics API; it remains to be seen how well GE compares.
> No, sadly. Couple caveats to consider.

You, Sir, are a gentleman and a scholar.
The A12X was on the 7nm process at the time; Dane will be on 8N, which is supposed to be an improved version of 8LP(x). Node-wise, it is already outclassed in logic.

The CPU that the A12X has is clocked pretty high, and it curbstomps the Dane model CPU, at about 150% of the multi-core performance the Dane model would ever have. Unless Nintendo goes ballsy and clocks it pretty high, or just high enough to at least trade decent blows with the A12X. Single-core score? Forget it. Though this doesn’t really matter much on a game console, since they are always running multi-threaded applications anyway; single-core matters more for phone things.

GPU-wise, well, this part I do expect to be pretty competitive, but keep in mind it's on a lesser node (presumably). What the iPad does is on a more efficient process, and at 15 watts portably, while the Switch would not be using 15W in portable, likely around half that. But it doesn’t matter, since the GPUs in both are better than the GPU in the XB1 on paper specs.

Dane will likely use a 4310mAh battery, while the iPad can use something like twice that, right? It’s less constrained by power draw and can clock higher, yet still manage the battery life of the Mariko unit on the longer end.

HOWEVER! A couple points of contention: the iPad is not meant for long gaming sessions, unlike the Dane model. It has no active cooling, and a GPU being better on paper does not mean it’s better in real-life performance; the thing can throttle down, though mobile devices have gotten pretty good in this regard.

Dane will be at a unique place: it will be more modern yet old at the same time, and have a pretty heavy lean on the GPU side, with ML hardware that allows it to punch well above its weight.

Nintendo can possibly have a denser battery for the same size, like in Samsung phones, which would allow them to have long battery life and slightly higher clocks, making it more performant before the docked-mode boost.

Current Mariko aims to draw only 7W, I think; Erista drew like 12W or so portably. The iPad draws 15W to perform its tasks at peak performance (if I understood you correctly), but likely clocks down for energy-saving reasons. Dane doesn’t need to do that clock switching; it’s a constant device.

It’s why I do hope they lean more into the CPU being clocked modestly well, at the expense of GPU speeds. Should tide them over until the next platform rears its ugly head and we have the 7th speculation thread.

A main takeaway is that this is more of an apples-to-oranges comparison.
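A back-of-the-envelope check on those power figures, assuming a nominal 3.7 V cell (the capacity and wattages are just the ones floated above, not confirmed specs):

```python
# Back-of-the-envelope battery life estimate (illustrative numbers only).
def battery_life_hours(capacity_mah, voltage_v, draw_w):
    """Runtime in hours for a given system power draw."""
    capacity_wh = capacity_mah / 1000 * voltage_v
    return capacity_wh / draw_w

# 4310 mAh at a nominal 3.7 V is roughly 16 Wh.
capacity = 4310
for draw in (7, 12, 15):  # Mariko-, Erista-, and iPad-like draws from the post
    print(f"{draw} W -> {battery_life_hours(capacity, 3.7, draw):.1f} h")
```

So an iPad-like 15 W draw roughly halves the runtime of a Mariko-like 7 W draw on the same cell, which is why the bigger iPad battery matters.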
> They should only be able to double the power from 16nm to 8nm, wouldn’t they?

It wouldn’t really work like that. For starters, we are still using the OG Switch config for the 20nm comparison; the 16nm/14nm revision used the more efficient process for battery life, plus more efficient RAM. And 8nm is an improved version of 10nm.
> You, Sir, are a gentleman and a scholar.

I think they can, pretty comfortably, if Dane pans out to have specs even remotely close to those speculated. It’ll just be a time issue; games will take a much longer time to develop, so expect that.
I am humbled that you take the time to come out with such a lengthy answer with an intelligible vocabulary to boot. While I understand that passively cooled mobile devices are unlikely to run at peak performance all the time (and thus, that the iPad can't reach the power of a Xbox One sustainably), I still hope that whatever the next Switch launches with can run Nintendo's first party line-up natively at a level of quality close to the cross-gen PS4/PS5 and Xbox One/Series games.
Halo Infinite looks reasonably good on an Xbox One S, and so does A Plague Tale: Innocence on the PS4. If the next Mario or Zelda can reach that level of fidelity, we are in for a treat.
> We have two different process nodes, the TSMC and the Samsung one. Both are marketed as “8nm” but in reality they do not function 1 to 1 identically. Nodes are marketing terms anyway.

I believe TSMC only has a 10 nm* process node, which no company has used since 2Q 2020. And I believe Samsung's the only foundry with an 8 nm* process node.
> I think they can, pretty comfortably, if Dane pans out to have specs even remotely close to those speculated. […]

I wonder about the games that are being made for Dane right now. If they really intended a 2020 launch but have now had a year+ of extra dev time, I wonder how that would affect some games, especially the ones exclusive to the Dane Switch.
> So do we think the Switch Pro will look exactly like the Switch with a new chip inside, or will it get a complete redesign?

Wouldn't be surprised if it looks very similar to the Switch OLED, maybe just a bit bigger dimension-wise.
> So do we think the Switch Pro will look exactly like the Switch with a new chip inside, or will it get a complete redesign?

I imagine that the DLSS model*'s design will be very similar to the OLED model's design. However, I'm not sure if the DLSS model* will use different materials for the housing, or materials similar to the OLED model's. I suppose it depends on how Nintendo plans to price the DLSS model*.
So I did some research to see how the Steam Deck compares to the PS4. Couldn't really get an accurate head-to-head assessment of games, but this video shows how the Steam Deck performs theoretically vs the Xbox Series X/S and PS5, which in turn helped me figure out how it compares to the PS4. Steam Deck specs are up to 1.6 TFLOPs of GPU, a 4-core/8-thread CPU up to 3.5GHz, with 16GB LPDDR5 (88GB/s bandwidth). The guy doing the video below states that the Deck's CPU is a little less than half the power, and its GPU is 1.25x more performant per flop than GCN.

So in other words, the Steam Deck is a portable PS4 in GPU power that plays at 720p, and it should easily be able to outperform base PS4 games in GPU and CPU, notably on an 800p screen. Not to mention way faster loading times and more RAM.

But I was only curious about how the Steam Deck performs vs the base PS4 because it makes me think of what the Switch 2's potential could be! 8nm Nvidia Ampere gets compared to 7nm Zen 2 a lot, especially in power.

So theoretically, a 1.6-2 TFLOPs GPU on 15 watts should be reachable, and to save battery, 800 GFLOPs in handheld mode (a handheld Xbone) with 2 TFLOPs in docked mode. For the CPU, the best-case scenario when compared to the Series X is gonna be a 3x gap, like Switch vs PS4. And all this on 8-12 GB of LPDDR5 memory on a quad-channel bus, hopefully using full memory speeds to reach 102.4 GB/s bandwidth, for $400. An 8nm node in Q4 2022/Q1 2023 feels really off, but it would be a full generational leap. And that's not counting DLSS performance.
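The arithmetic behind those figures can be sanity-checked quickly: the 102.4 GB/s falls straight out of a 128-bit LPDDR5 bus at 6400 MT/s, and the TFLOPs numbers assume the usual 2 FP32 ops per CUDA core per clock. The core count below is purely a placeholder assumption, not a leak:

```python
# Rough math behind the speculated numbers (all configs hypothetical).

def fp32_tflops(cuda_cores, clock_ghz):
    # 2 FP32 ops per core per clock (one FMA)
    return 2 * cuda_cores * clock_ghz * 1e9 / 1e12

def bandwidth_gbps(bus_width_bits, transfer_rate_mtps):
    # bytes/s = (bus width in bytes) * transfers per second
    return bus_width_bits / 8 * transfer_rate_mtps * 1e6 / 1e9

# A 128-bit LPDDR5 bus at 6400 MT/s gives the 102.4 GB/s cited:
print(bandwidth_gbps(128, 6400))        # 102.4

# With, say, 1536 Ampere CUDA cores (purely an assumption):
print(fp32_tflops(1536, 0.66))          # ~2.0 TFLOPs docked
print(fp32_tflops(1536, 0.26))          # ~0.8 TFLOPs handheld
```

Any core count/clock pair with the same product gives the same TFLOPs, which is why the speculation ranges rather than pins down a config.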
> How do you guys feel about a bigger Switch? Assuming the Joy-Cons would only get slightly larger, you could fit an 8.1″ display into a device with the same width as a Wii U GamePad, which Nintendo felt was comfortable for long play sessions.

To me, it depends on how Nintendo handles the ergonomics and weight, especially with the slightly larger Joy-Cons attached.
> I believe TSMC only has a 10 nm* process node, which no company has used since 2Q 2020. And I believe Samsung's the only foundry with an 8 nm* process node.

Huh, I could have sworn they had an 8nm which is just the 10nm.
* purely a marketing nomenclature for all the foundry companies
> I wonder about the games that are being made for Dane right now. […]

I’d suspect they would be using Dane for testing other titles that can have improvements, or probably better-optimized titles if they felt pressured. +1 year can do a lot, but it’s COVID times, so it would do something, most likely.
> Huh, I could have sworn they had an 8nm which is just the 10nm.

It sorta is. Here is an in-depth article.
Ampere isn’t compared to Zen 2, it’s compared to RDNA 2 ;P.
In this case, you’d be comparing the Ampere/Lovelace setup or something of the sort to RDNA1.5-1.8 ish of the other consoles.
(Not that it would matter much in this case I suppose)
Also, Ampere is a lot more bandwidth efficient than its GCN counterparts (PS4/XB1), and probably noticeably more bandwidth efficient than RDNA as well despite the latter being much newer, just not by as big a margin as over GCN, I’ll assume.
> It sorta is. Here is an in-depth article.

I was referring to TSMC, not Samsung.
Nvidia’s Ampere & Process Technology: Sunk by Samsung?
The inefficiency of Nvidia’s Ampere graphics cards in gaming workloads is more attributable to architectural choices than to the Samsung 8N process node. (chipsandcheese.com)
> Hi everyone! Didn't know that you all had moved to Famiboards. Nice to see everyone!

This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. These newer nodes have a significant advantage when it comes to SRAM density, allowing for a significant performance boost compared to 8 nm, which has only half the theoretical SRAM density of 5LPE and a third of N5's. Using a newer node could actually be cheaper if we consider that they would otherwise have to use 2 more expensive LPDDR5 modules to reach the perf of 2 LPDDR4X ones coupled with a 5 nm SoC that has more system cache than a bigger 8 nm one. That's what Apple is doing with its A14/A15/M1 products.
Was doing some reading on AnandTech's Apple A15 breakdown, and one thing that impressed me was Apple's 32MB SLC. What does everyone think the chances are that Nvidia and Nintendo go for a full Arm implementation of the L3 cache (8MB), instead of cutting it down like Qualcomm and Samsung do? The biggest point against that would be area, given that Dane is 8nm based.
> What option do they have that won't blow out the budget? Nvidia always planned for this to be 8nm. Maybe if they had thought about using someone in addition to Samsung from the start, but we'd probably have heard about that by now.

I suspect that the delay of Dane + the rumours of it not having been taped out may be linked to a change in the process node used for it. Nvidia may have asked Samsung to make a new node compatible with 8N, the same way N6 is a denser and more power-efficient version of N7P due to the addition of EUV layers in the process (as opposed to N7+/N5, which are fully EUV based). The node could be used for entry-level/mid-range chipset products past 2021, due to 5 nm (TSMC/Samsung) capacity being allocated to high-end chipsets, the same way they made 12 nm products in 2021.
> This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. […]

I'm wondering if the tape-out milestone hasn't been reached yet because of possible tweaks to the architecture, more so than to the manufacturing process node of choice. RDNA2 definitely caught Nvidia by surprise, just as Zen 2 and Zen 3 caught Intel resting on their laurels, so the advancements of Lovelace are either superficial in scope, holding over until Hopper, or Nvidia will try to tweak Ampere's shortcomings as much as possible.
> It remains to be seen if Nvidia will truly fix Ampere’s flaws in the next generation or if they will rely on features like DLSS and a superior real-time raytracing implementation in order to drive sales.

Strange quote. Gamers don't care about efficiency, but features. All efficiency does is allow for higher performance (lower power consumption? ha! turn that bitch into a space heater!) to drive those features.
> This is where Nintendo or Nvidia could change their minds and use 5/6/7 nm instead of 8 nm. […]

I don't think Dane's likely to be fabricated using a 5 nm process node from Samsung or TSMC, since Nvidia's probably going to prioritise fabricating consumer Lovelace GPUs (e.g. AD102) and datacentre Hopper GPUs on 5 nm from Samsung and/or TSMC. And Samsung was rumoured on 5 July 2021 to have yields below 50% for some of its 5 nm process nodes, which seems to be validated by a Xiaomi executive's comment that lack of capacity was the reason the Snapdragon 780G, fabricated on Samsung's 5LPE node, was cancelled. And Microsoft was rumoured to be partnering with AMD to design an Arm based SoC for a Microsoft Surface device, which was supposed to be fabricated using a 5 nm process node from Samsung, but is now being fabricated using a 5 nm process node from TSMC, due to low yields.
> Do we have any good estimates on how much it would cost Nintendo to go for a newer node? I wonder how much it cost Nintendo on the R&D side for Dane; having to scrap that and use a newer node would also add to the expense.

That won’t happen.
> Seems way too expensive to me.

Yeah, that’s not Nintendo even a little bit.
> Do we have any good estimates on how much it would cost Nintendo to go for a newer node? […]

Could be half a billion.
> Is it really two WiFi antennas, or one for WiFi and one for Bluetooth? The OG also had two antennas for those, but the Bluetooth one is in the bezel of the screen, so most teardowns don’t show it.

That's possible. I was quoting the caption in the video (around 2:23), but can't confirm.
> My OLED Model's wifi download speed hovers around 30Mbps, compared to about 20Mbps with my OG Switch. Looking at this Japanese teardown video, it seems that the new LDS antennae help. Yes, you read that right, plural (see my circles below):

Have you noticed any substantial improvements to the Wi-Fi reception for the OLED model?
There are two antenna connectors near the center:
The big antenna:
The small one:
For comparison, here's the PCB antenna in the OG: