Alright, the stuff about Drake and 2023/2024/2025? In the grand scheme, that's short-term and narrow/Nintendo-specific. Looking past it, there's longer-term, industry-wide doomshit lurking. And I'm a loon who has alluded to, or even explicitly typed about, this every so often here on Fami.
First off, consider this triangle of traits of a console:
1. Performance - what are people getting by buying this device? What are people expecting from this brand and device? For example, what do people expect from a PS5/6/7?
2. Power draw - this affects the form. The PS5 is as large as it is to handle the heat of its 200 watts. Would we then expect a PS6 to be smaller than a PS5, the same size as a PS5, or
be even larger than a PS5?
3. Price - Would we expect a PS6 to remain at $500 USD? Or would we expect a price hike to...oh I dunno, $600? $700? Even higher?
Next, there's a couple of things we're dealing with:
Dennard scaling died back in the 2000s. That scaling was basically 'perf keeps climbing while power draw stays suppressed'. Notice the climb in power draw for the stationary consoles since then? Yea...
Now, one way to try to work around that is to go wider (lob more transistors at it) to squeeze out more perf/watt. But...
Transistor/$$$ is heading the wrong way. In turn, that does not bode well for perf/$$$.
I'll get my conclusion out of the way first:
A PS6 sold in 2028 at $500 USD and the same size as the PS5 will, to the general audience, relative to the PS5 itself, appear less impressive than the PS5 appears relative to the PS4.
The primary reason I say that, of course, goes back to the foundries.
PS4 was initially manufactured on TSMC's 28nm process. PS5 is on some variation of TSMC's 7FF/N7. ISO-perf power draw-wise, according to TSMC's claims: 28nm->16FF is -70%. Then 16FF->7FF/N7 is another -60%.
For a prospective PS6 in 2028? Some variation of N2 is my guess as the best option there. That's scheduled for volume production in 2025, so I'd expect N2's successor to start up in either 2027 or 2028 (so it'd be either too bleeding edge for the money side to work or outright not available). N7->N5's ~-30% power. N5->N3E's -30 to -35%. And as of June 2022, N2 is projected to be offering -25 to -30% power compared to N3/N3E.
If one were to take TSMC's claims at face value, power draw-wise, going from N7 to N2 is somewhere in the ballpark of going from 28nm to 16FF+. Now, let's say some N2 refinement/variant gets used instead of base N2. Absolutely plausible, as TSMC has already announced that the next new approach to power delivery will come with a later version of N2 instead of the base version. But I probably wouldn't expect it to reduce power draw so much as to be near -60%.
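To put numbers on that 'ballpark' claim, here's the compounding arithmetic as a sketch. All percentages are the TSMC claims quoted above; where a range was given (N3E, N2), I've assumed the midpoint.

```python
# Sketch: compounding TSMC's claimed iso-performance power reductions.
# Each claim is 'remaining power = 1 - reduction' vs the previous node.

def compound(reductions):
    """Multiply per-node 'remaining power' factors; returns total factor."""
    total = 1.0
    for r in reductions:
        total *= (1.0 - r)
    return total

# PS4 -> PS5 era: 28nm -> 16FF (-70%), then 16FF -> N7 (-60%)
ps4_to_ps5 = compound([0.70, 0.60])           # ~0.12, i.e. -88% total

# Prospective PS5 -> PS6: N7 -> N5 (-30%), N5 -> N3E (midpoint -32.5%),
# N3E -> N2 (midpoint -27.5%)
ps5_to_ps6 = compound([0.30, 0.325, 0.275])   # ~0.34, i.e. -66% total

print(f"28nm -> N7: {1 - ps4_to_ps5:.0%} power reduction at iso-perf")
print(f"N7 -> N2:   {1 - ps5_to_ps6:.0%} power reduction at iso-perf")
```

So N7->N2 lands at roughly -66%: close to the single 28nm->16FF step (-70%), and well short of the full 28nm->N7 run (-88%).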
Btw, reminder that the performance @ ISO-power side of these scaling claims involves much smaller numbers. Short version: know how the PS5's CPU can run at a bit over double the frequency of the PS4's?
That's not happening again.
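For reference, the frequency claim above in numbers, using the publicly known clocks (PS5's is its variable-frequency cap):

```python
# The 'bit over double' frequency jump, PS4 -> PS5 (public figures).
ps4_cpu_ghz = 1.6   # PS4 Jaguar cores
ps5_cpu_ghz = 3.5   # PS5 Zen 2 cores, up to (variable frequency)

print(ps5_cpu_ghz / ps4_cpu_ghz)  # 2.1875, i.e. 'a bit over double'
```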
Alright, moving on from power draw to look at area scaling. This is annoying to figure out since TSMC changed up how it reports this sort of thing in recent years. Looking at the chart from this page, logic area reduction going from N7 to N3 should actually be in the ballpark of 16FF+ to N7. In a vacuum, that's not too shabby. Buuut, there was also a 50% logic area reduction going from 28nm to 16FF, according to this. So, can N3->N2 match that? Let's take a look at this. Well, first off, there's the aforementioned change in reporting style: mixed chip density instead of area reduction. Second... if N5->N3 is ~1.3x mixed chip density, then N3->N2's 1.1x does... not sound good. I don't think that N3->N2 can come close to matching 28nm->16FF.
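A quick sketch to put both reporting styles in one currency: a 50% area reduction is the same thing as a 2.0x density multiplier, which is why N3->N2's 1.1x looks so weak next to 28nm->16FF. The 2.0x-vs-1.43x comparison is my framing, using only the figures quoted above.

```python
# Convert TSMC's old 'logic area reduction' style into a density
# multiplier so it can sit next to the newer 'mixed chip density' claims.

def area_reduction_to_density(area_reduction):
    """E.g. a 50% area reduction -> 1 / 0.5 = 2.0x density."""
    return 1.0 / (1.0 - area_reduction)

d_28nm_to_16ff = area_reduction_to_density(0.50)  # 2.0x, one node jump
d_n5_to_n3 = 1.3   # TSMC mixed chip density claim
d_n3_to_n2 = 1.1   # TSMC mixed chip density claim

print(d_28nm_to_16ff)           # 2.0
print(d_n5_to_n3 * d_n3_to_n2)  # ~1.43, and that's over TWO node jumps
```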
What makes this an issue is the trend of price climbing with each successive node. Checking with Thraktor's post here, an N7 wafer costs maybe 2.5x that of a 28nm wafer (and that was over 4 nodes). If N5 really is in the ballpark of +50% over N7, then N7->N2 is on track to be a relatively pricier jump than 28nm->N7 was, with inferior density scaling. That's potentially baaaad for transistor/$$$.
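The mechanism, as a sketch: cost per transistor is basically wafer cost divided by transistor density, so whenever wafer price climbs faster than density, transistor/$$$ gets worse. The 1.5x wafer cost below is the '+50% N7->N5' figure above; the 1.3x density is illustrative, not a node-for-node match.

```python
# Sketch: why 'wafer price outruns density' hurts transistor/$$$.
# relative cost per transistor = relative wafer cost / relative density

def relative_cost_per_transistor(wafer_cost_mult, density_mult):
    return wafer_cost_mult / density_mult

# A node jump that costs +50% per wafer but only delivers 1.3x density:
print(relative_cost_per_transistor(1.5, 1.3))  # ~1.15: transistors got PRICIER
```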
Alright, moving on from foundry and going into secondary reasons:
I outright do not expect the jump from Zen 2 to ~ Zen 6 to match the gains going from Jaguar to Zen 2. Compounded by frequency not coming anywhere near doubling again, PS5->PS6's CPU improvement will appear lackluster compared to PS4->PS5.
GPU-wise... I'm less absolute there. The big issue is transistor/$$$, since the main way to inflate GPU grunt is 'MOAR TRANSISTORS'. Outside of that, it's about features and how you utilize your transistor budget. Potentially, AMD could massively improve on things like ray tracing. But at the same time, AMD doesn't seem all that interested in spending transistors on specialized cores in lieu of more generalist shaders/traditional rasterizing. Another issue is the memory bandwidth to feed everything, mainly the GPU. The ancillary concerns that stem from that are, of course, $$$ and power draw. GDDR7 is supposed to double the bandwidth of GDDR6 at -25% power per bit, so when not accounting for improvements from DRAM fabrication nodes themselves, +100% bandwidth comes at +50% energy. Hopefully DRAM fabrication doesn't get walled and hey, maybe we can get 2x bandwidth at 1x energy. But I'm less hopeful with $$$. GDDR6 chips had a price hike over GDDR5, and IMO, GDDR7 is likely to be even more expensive.
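The GDDR7 arithmetic above, spelled out: that '-25% power' only nets out to +50% total energy if you read it as energy per bit transferred, which is the assumption I'm making here.

```python
# Sketch of the GDDR7 math: '-25% power' read as per-bit energy,
# so doubling bandwidth still raises total memory power.
bandwidth_mult = 2.0     # GDDR7 claim: 2x GDDR6 bandwidth
energy_per_bit = 0.75    # GDDR7 claim: -25% power per bit

total_power_mult = bandwidth_mult * energy_per_bit
print(total_power_mult)  # 1.5 -> +50% memory power at 2x the bandwidth
```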
Storage-wise, speed will not jump again like going from PS4's HDD to PS5's ~early PCIe gen 4 NVMe SSD. That's not so much a raw technical barrier as 'this shit is getting hot'. As in, I don't think that the traditional, set top box console form factor can keep up with adequately cooling newer M.2 form factor NVMe drives going forward. (hell, can the M.2 form factor itself keep up with PCIe going forward?)
That is, let's say that PCIe gen 6 or 7 drives are out by 2028. A PS6 with the size and design of a PS5 would probably need to throttle down to peak PCIe gen 4 or 5 speeds, instead of going anywhere near cutting edge speed.
Unless there's a change in form factor of the drive itself and the heat density is lowered? But that's asking us to move to a new form factor within this decade, hah. Such a silly thought.
And that's just focusing on sequential speeds. Improvements in random read/write within NVMe SSDs are likely to be... multiple orders of magnitude less than the jump from HDD to NVMe SSD.
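For context on the sequential-speed throttling point above, here's the PCIe ladder as a rough sketch: each generation doubles the per-lane rate, and the GB/s figures below are approximate raw x4-link bandwidth, ignoring encoding/protocol overhead differences between generations.

```python
# Rough PCIe x4 link bandwidth per generation (raw, approximate).
# Gen 4 x4 is ~8 GB/s; each later generation doubles the rate.

def pcie_x4_gbytes_per_s(gen):
    """Approximate x4 link bandwidth in GB/s for PCIe gen >= 4."""
    return 8 * 2 ** (gen - 4)

for gen in (4, 5, 6, 7):
    print(f"PCIe gen {gen} x4: ~{pcie_x4_gbytes_per_s(gen)} GB/s")
```

So a PS6 throttled to gen 4/5 speeds would be leaving a 2-4x sequential gap on the table versus a hypothetical gen 6/7 drive, and that's before the random read/write caveat above.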