It does look like he had specific screen info a month before, but it doesn't say anything about Nintendo until the day after the Bloomberg article. So he didn't connect the two at the time of his first post.
> Is there an association between node and possible release timing, i.e. is H1 2023 still in the cards with a smaller node, based on parts availability? I'm aware they can't delay hardware timing by so many months.

To make a long story short, H1 would still be on the cards with 4N.
Depends on when the SoC's taped out, I think.
Not really, unless you want some super-fresh new node. But 5nm has been in use for years, and Nvidia has ample product on it for Drake to take advantage of. And Lovelace has been sampled for a while now.
The initial design had to have had a target process node in mind, and that's not something that can easily change after the design has begun. So whatever process node they had chosen back in 2019/2020, which gave us the late-'22/early-'23 window, likely hasn't changed, meaning the window itself likely hasn't changed.
4N belongs to the 5nm family. If it's 4N, it's 5nm.
Does anybody have a chart of "marketing nomenclature" vs actual node size?
It would still be a "newer" revision of the 5nm process though, right? If a chip were designed around the TSMC 5nm from 2020, would that automatically translate to being manufactured on 4N?

It's custom, only for Nvidia, made by Nvidia.

4N is an Nvidia-specific node in the 5nm family, so I'd wager 4N is more likely. Especially given Nvidia would have known Ada would be on 4N when the chip was being designed.
Ah, looks like my hopes and dreams for Drake using IBM's 2nm node are dashed. Doomed, indeed.
(But seriously, thank you for the responses.)
> Ugh, I hate that we're gonna have to start calling them x-angstrom nodes. It makes them look foolish; none of the transistor features can actually be angstrom-sized. Marketing is dumb.

20A sounds WAY more advanced than 2nm!
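Since nobody had a chart handy: as a rough sketch, here is how the marketing names mentioned in this thread map to process families. These are marketing designations rather than measured feature sizes, so treat the groupings as approximate.

```python
# Marketing node name -> process family, for the nodes discussed in this thread.
# These are marketing designations, not physical transistor dimensions.
node_families = {
    "Samsung 8N (8nm)": "8nm-class; used for desktop Ampere and Orin",
    "TSMC N7 / N6": "7nm family; N6 is a 7nm-class derivative",
    "TSMC N5 / N4 / 4N": "5nm family; 4N is Nvidia's customized variant",
    "Intel 20A": "2nm-class; 'angstrom' naming, since 20 angstroms = 2 nm",
}

for name, family in node_families.items():
    print(f"{name}: {family}")
```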
Great info. For the people reading this, remember the Switch was tested at 921MHz and ended up at 768MHz. If we get the 1.125GHz GPU, that is 3.456 TFLOPs, which is the same performance tier when docked as the Series S; it's slightly below it, but it wouldn't be a major difference. And if DLSS 3.0 is available, they could even push 4K 60fps out of it: even if the games feel like 30fps, they would look as smooth as 60fps (that is basically what DLSS 3.0 does).

> With the clock speeds Nintendo settled on with the X1 for the Switch, people should probably be very conservative with their expectations for the next Switch. They are likely shooting for about 4-5 watts for the SoC in portable mode and 10-12 watts docked. Nintendo will be far more conservative with clock speeds than most of us would like.

The TX1's GPU was 998MHz at full clock, and it was reduced by ~23% to 768MHz, on the power-leaking 20nm node. A 1.38GHz clock with a 23% reduction is 1.063GHz, or 3.27 TFLOPs. But testing on Switch was done at 921MHz, which is a reduction of less than 17% down to 768MHz. If we are looking at the same approach, 1.125GHz is pretty close to that, as it is a reduction of slightly more than 18% from 1.38GHz. Basically, 1.125GHz would be in line with the reductions Nintendo made to the TX1; it's even slightly more conservative than the Switch was.
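For anyone who wants to check the arithmetic in the last two posts, here is a small sketch of it. The 1536-core count and the 1.38GHz clock ceiling are the figures assumed in this thread, not confirmed specs.

```python
# Clock-reduction and TFLOPs arithmetic from the posts above.
# Assumed figures from the thread: 1536 CUDA cores, 1.38 GHz max GPU clock.

def tflops(cores: int, ghz: float) -> float:
    """FP32 TFLOPs: 2 FLOPs per CUDA core per cycle (fused multiply-add)."""
    return cores * 2 * ghz / 1000

def reduction_pct(full_mhz: float, final_mhz: float) -> float:
    """Percentage cut from the full clock down to the shipped clock."""
    return (full_mhz - final_mhz) / full_mhz * 100

print(reduction_pct(998, 768))    # TX1 full clock -> retail Switch: ~23.0%
print(reduction_pct(921, 768))    # Switch test clock -> retail: ~16.6%
print(reduction_pct(1380, 1125))  # assumed Drake ceiling -> 1.125 GHz: ~18.5%

print(tflops(1536, 1.063))        # ~3.27 TFLOPs at a 23% reduction
print(tflops(1536, 1.125))        # ~3.456 TFLOPs at 1.125 GHz
```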
We won't know what manufacturing process Drake is made on until after the new model launches, by which point it won't matter.
4.2W for 660MHz on just the GPU is impossible on 8nm: according to Nvidia's estimation tools, the GPU with 6 TPCs at 624MHz draws 5.6W by itself. So this estimate has to be for a more efficient node.
Does Drake have the necessary hardware for DLSS 3.0?
For frame generation, no.
I wouldn't say no quite yet; again, the OFA in Orin is an unknown factor.
> 4.2W for 660MHz on just the GPU is impossible on 8nm: according to Nvidia's estimation tools, the GPU with 6 TPCs at 624MHz draws 5.6W by itself. So this estimate has to be for a more efficient node.

We're getting a 5nm Nintendo Switch Advance. Whew, lads.
> For frame generation, no.

It's unknown, actually. Drake's OFA engine is more advanced than Ampere's; it's unknown how it stacks up against Ada's, but it was built for automotive AI, which is the more extreme application, and we just don't know what that means exactly. However, frame generation can technically be done more slowly on Ampere, and when we are talking about 30fps' worth of generated frames, that is likely possible, given that DLSS 3.0 on Ada can handle 100fps+.
> We'll have a pretty good idea though, once we have battery, battery life, and clock speeds. At least we know which processes it definitely isn't.

We don't really know for sure. And I caution people from jumping to a conclusion at the moment, lol. Not trying to be pessimistic, just that we don't have a lot of information here.
> We're getting a 5nm Nintendo Switch Advance. Whew, lads.

TSMC's N6 process node is also a possibility.
> We don't really know for sure.

We have Orin, and we have desktop Ampere. There's no way Drake's efficiency gains over those make 8nm deliver twice the performance per watt.
We also don't know if that information used real measured power draw, or some kind of target or estimation.
Running Nvidia's estimation tools for Orin with everything set to low and the GPU off, system power draw is estimated at 8.5W; with the GPU set to 4 TPCs and 624MHz, it draws 14.5W. That is 6W for 1024 CUDA cores on 8nm at 624MHz. Drake is being estimated to draw just 4.2W at 660MHz on 6 TPCs, or 1536 CUDA cores... Without a single doubt, Drake can't be 8nm if the estimation is accurate, and with a disparity this great, it's very, very unlikely it wasn't caught.
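To put numbers on that disparity, here is the same comparison as rough performance-per-watt figures. This is a sketch built on the tool readings quoted above (8.5W, 14.5W, 4.2W); they are estimates from that tool, not measurements.

```python
# Implied perf/W from the estimation-tool figures quoted above (not measurements).
orin_gpu_w = 14.5 - 8.5   # system with GPU at 4TPC/624MHz minus GPU-off baseline = 6.0 W
orin_cores = 4 * 256      # 4 TPCs = 8 SMs = 1024 CUDA cores on Ampere
orin_mhz = 624

drake_gpu_w = 4.2         # estimated GPU-only draw from the leaked tool reading
drake_cores = 6 * 256     # 6 TPCs = 12 SMs = 1536 CUDA cores
drake_mhz = 660

# FP32 throughput scales with cores * clock, at 2 FLOPs per core per cycle.
orin_gflops = orin_cores * 2 * orin_mhz / 1000     # ~1278 GFLOPs
drake_gflops = drake_cores * 2 * drake_mhz / 1000  # ~2028 GFLOPs

print(orin_gflops / orin_gpu_w)    # ~213 GFLOPs/W on Samsung 8nm
print(drake_gflops / drake_gpu_w)  # ~483 GFLOPs/W implied: roughly 2.3x better
```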
> We also don't know if that information used real measured power draw, or some kind of target or estimation. It's not enough info to change our confidence much, IMO.

I disagree wholeheartedly. There is no way you could estimate Ampere with 12 SMs to draw 4.2W on 8nm at 660MHz; it's double that.
> With the clock speeds Nintendo settled on with the X1 for the Switch, people should probably be very conservative with their expectations for the next Switch.

They were also very flexible with those clock speeds as time went on, introducing boost clocks for loading and even adding more portable profiles with higher-than-usual power.
> Does Drake have the necessary hardware for DLSS 3.0?

On top of Ampere hardware simply not being supported by DLSS 3, this project was being developed alongside DLSS 2.x, and that's what was integrated into the NVN2 source at the time of the leak. So they were developing with 2.x in mind, and there's no mention of frame generation in the API. That is not going to be something tacked on at the last minute of development (software planning at the scale of Nintendo and Nvidia just doesn't work that way), even if the hardware were supported.
> I disagree wholeheartedly. There is no way you could estimate Ampere with 12 SMs to draw 4.2W on 8nm at 660MHz; it's double that.

Without knowing anything about the context of that chart, I really don't think it can inform us much. Is the tool using the 4.2W number to estimate the entire GPU power draw, or maybe just part of it? Does Drake have some major power-efficiency improvements that Orin does not? Could it have been an old, out-of-date test done when the GPU had a different configuration with different power requirements?
If it's possible, can't they add it post-launch to the SDK?

For me, it's 8nm until it's confirmed otherwise. /shrug

If Nvidia isn't too keen on supporting Turing or Ampere with frame generation, I don't think they'll make the effort to support Drake with it. If Nintendo wants FG, they might look towards AMD's solution when that comes out.
FWIW, a small gallery of examples of PC Witcher 3, low settings, DLSS 360p->720p versus native 720p.
> If it's possible, can't they add it post-launch to the SDK?

They could. I just don't think it's in the cards. That's extra development on NVN2, on the SDK and OS integration/distribution side, and on the DLSS side, since, possible OFA capability or no, DLSS 3 just doesn't support Ampere.
> We're getting a 5nm Nintendo Switch Advance. Whew, lads.

I really wouldn't go that far.
My issue with this is that we are playing "pick a point that fits my idea" with the provided data, and assuming it is perfectly aligned with Drake. How do we know it's for Drake? How do we know it's not for tool profiling? There's a lot we don't know here about the SoC and its performance characteristics; too much to draw a reasonable conclusion from.
> For me, it's 8nm until it's confirmed otherwise. /shrug

To be fair, you wouldn't get any confirmation from Nintendo about this. You'd have to make an educated guess based on other factors at play, and even then it's making an assumption.
Somehow everybody concluded that TX1+ is using 16nm.
> So I know devs wanted 1 GB/s for PS5/XBS, but what would be the rough equivalent for Drake? Before I ask any more questions, how important are write speeds for video games? I assume writes would only be used for saving? Do games need to read and write at the same time? If not, I assume they could use the full-duplex speed?
>
> If they go with SD, is UHS-II's top speed (312 MB/s) fast enough? What about UHS-III (624 MB/s)? Even "just" UHS-II cards seem somewhat expensive (~$50 for 128 GB, ~$100 for 256 GB) and not that common, from what I've found on Amazon. They also seem to cap out at around 250 MB/s in real-world read speeds, unless you want to spend ~3x for 300 MB/s. And I can't seem to find any UHS-III cards at all, so I guess that's not an option.
>
> But going off of Samsung's website, UFS seems (seemed?) to be quite a bit more affordable: $59.99 for 256 GB with a max read speed of 500 MB/s, which I'm assuming is/was UFS 1.0/1.1 (there doesn't seem to be a difference between the two, according to Wikipedia). That seems like a pretty great deal to me. Would that speed be enough, or would they have to go for 3.0 cards (which I assume haven't even been manufactured)? On a semi-related note, the voltage is listed as 2.7-3.6V. Is that high for a system that would be running ~7-10 watts in portable mode?
>
> So from my perspective, it seems like UFS is the clear winner here, but there are likely details that I'm not privy to or am missing entirely. I feel like if you're going to have consumers spending more for storage on average, it would make more sense to go with a different type (UFS) altogether. Why risk confusing your consumers and having them potentially buy a UHS-I card by accident? And if UHS-II speeds aren't enough and UHS-III doesn't seem to exist at all, isn't UFS all that's left? Unless there's another option that I'm unaware of (I probably am). If it was discussed here, please forgive me for forgetting about it. Oh, and thanks to all those answering my sudden plethora of storage-related questions.

UHS-II and UHS-III are, as you point out, quite a bit pricier, as are other options besides UFS Card. M.2 storage has power-consumption and size problems, so I leave that out.
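As a rough way to compare the options above, here is what each card's sequential read rate means for load times, using an assumed 8 GB read burst (the 8 GB figure is purely illustrative; the speeds are the ones quoted above, plus UHS-I's nominal ~100 MB/s bus):

```python
# Time to read an assumed 8 GB of game data at the speeds discussed above.
speeds_mb_s = {
    "UHS-I bus (~100 MB/s)": 100,
    "real-world UHS-II cards (~250 MB/s)": 250,
    "UHS-II bus maximum (312 MB/s)": 312,
    "UFS Card (500 MB/s)": 500,
    "UHS-III bus (624 MB/s)": 624,
    "PS5/XBS dev request (1 GB/s)": 1000,
}

burst_mb = 8 * 1024
for name, mb_s in speeds_mb_s.items():
    print(f"{name}: {burst_mb / mb_s:.0f} s")
```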
> I seriously do wonder how this is going to play out long-term. Currently the PS5 uses a custom internal SSD and has massive cooling for everything, including that SSD, which gets pretty hot. The Series X and Series S also have appropriate cooling, but their SSDs are slower. So will the next-gen consoles be more like the PS5, huge and built to cool all components? Hard to really say at the moment...

If the SSDs are already getting hot now, then there doesn't seem to be much headroom left in the number of watts that can go into a drive of that form factor. So, if there's a goal of increasing raw sequential reads in the following gen, one option is to work off of presumed improvements in perf/watt of later drive generations, but not go anywhere near full throttle; use some p-state in the middle. Alternatively, see how eUFS develops and go with that. Samsung did announce UFS 4.0 this year, claiming read speeds of up to 4.2 GB/s. Fast-forward to the time of "The Next Generation", and maybe there's a UFS 5.0 with a doubling to 8.4 GB/s. And given UFS's design principles, that should easily offer multiple GB/s per watt (I'm guessing UFS 3.1 should already hit at least 2 GB/s per watt under typical operating conditions).
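Following the efficiency guess above, a quick power-budget check: sustained read rate divided by efficiency gives the watts the storage would need. The 2 GB/s-per-watt figure is the poster's guess, and the UFS 5.0 entry is hypothetical.

```python
# Watts needed to sustain a sequential read rate at an assumed efficiency.
EFFICIENCY_GB_PER_W = 2.0  # guessed GB/s per watt, per the post above

rates = [
    ("UFS 3.1-class (~2.1 GB/s)", 2.1),
    ("UFS 4.0 as announced (4.2 GB/s)", 4.2),
    ("hypothetical UFS 5.0 (8.4 GB/s)", 8.4),
]
for label, gb_s in rates:
    # Even the hypothetical 8.4 GB/s part stays around 4 W at full throttle.
    print(f"{label}: ~{gb_s / EFFICIENCY_GB_PER_W:.1f} W")
```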