Btw, do you mean 2030 here?
Yeah, late 2020s to 2030, given how much later they'd have spare resources to start investing compared to Nvidia. Both hardware-wise and... in the time/money to train their own neural networks, I guess?
I think maybe 10-12 cores can be expected by that point? Being able to do more within the same power budget would probably be their biggest focus, and it would be several years down the line. Not a huge upgrade, mind you. But MS isn't so wedded to AMD, maybe, while PS and Nintendo would be more wedded to their hardware providers. Unless Sony and Nintendo figure out a method similar to MS's, which is more detached from the silicon: it's actually a virtual Xbox environment that speaks to the silicon itself, just like how DX12U does for current GPUs and CPUs.
(DX12U was just an example; Xbox uses a lower-level forked variant of DirectX that PC users cannot use)
The way I'd see 10-12 cores happening with Zen pretty much hinges on AMD changing their cluster size. If the generation of Zen used in a PS6 sticks with 8-core clusters, I don't see AMD creating a 10- or 12-core cluster just for the consoles.
However, bringing up that MS isn't necessarily so wedded to AMD is pretty interesting.
If MS sticks with x86, then there are three companies that have x86 architecture licenses: Intel, AMD, and VIA. Uhh, I probably need to do more reading on Zhaoxin to see VIA's CPU capabilities. But given the need for graphics too, I could probably rule out VIA for now.
Intel... if Intel's Xe can become reasonably competent by the late 2020s, that's... an option. If you want to go above 8 cores, there's something there... but I'm not looking at the big Core cores. No, I'm looking at Atom. If the inter-core latency issues can be fixed, you could maybe do three or four clusters of 4 Atom cores each for 12C/12T or 16C/16T. Gracemont right now is basically at Skylake levels of IPC, but in gaming it's crippled by inter-core latency. Fix that, add a few more iterations of IPC gains and a few more iterations of stretching out the power-frequency curve, and there's a possibility that this route ends up more area- and energy-efficient for a minor loss in single-thread strength. It does hinge a lot on Intel delivering on their foundry roadmap, though.
For GDDR vs LPDDR vs DDR, I think it's that DDR has the lowest latency while GDDR has the highest. LPDDR has the lowest power draw while GDDR has the highest. And in terms of bandwidth, LPDDR is the worst while GDDR is the best.
It’s all just specialized RAM in the end. But there are strengths and weaknesses.
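Those tradeoffs can be sketched as a tiny lookup table. To be clear, these are just ordinal rankings of the rough strengths/weaknesses described above (1 = best), not measured numbers; exact figures vary a lot by generation and implementation.

```python
# Rough ordinal rankings (1 = best) of the three DRAM families,
# per the tradeoffs described above. Illustrative only, not measured data.
ram_tradeoffs = {
    "DDR":   {"latency": 1, "power": 2, "bandwidth": 2},
    "LPDDR": {"latency": 2, "power": 1, "bandwidth": 3},
    "GDDR":  {"latency": 3, "power": 3, "bandwidth": 1},
}

def best(metric):
    """Return the RAM type ranked best (lowest rank number) for a metric."""
    return min(ram_tradeoffs, key=lambda t: ram_tradeoffs[t][metric])

print(best("latency"))    # DDR
print(best("power"))      # LPDDR
print(best("bandwidth"))  # GDDR
```

Which is exactly why each console picks differently: latency-sensitive designs lean DDR, handhelds lean LPDDR, bandwidth-hungry GPUs lean GDDR.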
It's also why Nintendo went with DDR3 instead of GDDR for the Wii U: it had noticeably less latency. The Wii, on the other hand, did use pretty much brand-new GDDR at the time. That required an alteration to the silicon, of course, so it's not just a die-shrunk GCN; there's more to it!
If HBM becomes viable by the end of the generation, hopefully consoles do adopt it. All of them.
Hell, IF Nintendo managed to get HBM for the Switch, they wouldn't have to worry about bandwidth for their device like... ever at that point lol.
Just the CPU I guess.
I just don't think that HBM will reduce in cost fast enough to be worthwhile using in consumer grade products. I need to see them return to usage in consumer GPUs first to change my mind.
Bizarrely enough, I'm not that worried about CPU for a post-Drake Switch style device.
So I wrote earlier that my optimistic expectation for Drake's CPU can be described as:
"On the high/optimistic end, the CPU ought to be able to handle a class of complexity in the middle between the PS4 and PS5/Xbox Series. (that is, if you break up the improvement from PS4/XBO to PS5/Xbox Series into two steps, PS4/XBO->Drake would be one, then Drake->PS5/XS would be the second step)"
So if PS4->PS5 was two steps, ideally Drake ends up a step behind PS5. Now, I think a PS6's realistic ceiling is one step up from PS5, not two. Ergo, a post-Drake Switch only needs to advance one step to maintain distance; in other words, the target is PS5-level complexity. And I still have enough confidence in Arm to make that doable within 2 watts on... an N2 refinement/variant. They just need to deliver a bit more than 5% each year.
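To put rough numbers on that "a bit more than 5% each year": here's a toy compounding sketch. The rates and the 7-year window are my own illustrative assumptions, not anything from Arm's actual roadmap.

```python
# Back-of-envelope: what a steady annual per-core gain compounds to.
# The rates and the 7-year window are illustrative assumptions only.
def compound_gain(annual_pct, years):
    """Cumulative multiplier from annual_pct% improvement per year."""
    return (1 + annual_pct / 100) ** years

for rate in (5, 7, 10):
    print(f"{rate}%/yr over 7 years -> {compound_gain(rate, 7):.2f}x")
# 5%/yr  -> 1.41x
# 7%/yr  -> 1.61x
# 10%/yr -> 1.95x
```

So even a modest 7%/yr compounds to roughly 1.6x over a console cycle, which is why "a bit more than 5" is enough to keep pace with a one-step jump.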
It’s more or less why they are allowing PC releases I suppose.
Which is a pretty funny pivot. That's clearly a relatively recent decision, as it's at odds with the hyped-up fast storage + compression capability. Any game that hard-requires that capability is automatically cutting off the segment of the PC userbase using PCIe gen 3 NVMe drives and slower. That's a pretty large segment.