LPDDR4X actually lowered voltages too, but yeah, 5X appears to basically just be higher clocks/support for those higher clocks (like, the signal integrity/reliability improvements are nice, but those are in service of those higher clocks).
How does an A78 CPU compare to a Jaguar and a current gen console?
Clock for clock, A78 will lap the Jaguar. Potentially two laps depending on the exact workload.
IIRC, by Geekbench, the A78 is actually slightly better clock-for-clock than the CPU in PS5/Series (remember, although they are using Zen 2, they're using monolithic Zen 2, not chiplet. Monolithic Zen 2 has less cache, thus hindering performance relative to the chiplet version).
Due to clocks, I'd expect the CPU to land somewhere in the middle between PS4 and PS5.
I am willing to make this prediction:
A game design that can run as 'acceptably playable' on the PS4 will absolutely be doable on the NG. The gap in CPU and RAM is substantial enough that if something that can run fine on the PS4 fares worse on the NG, I 100% blame the developers.
A game design that can kinda, sorta, but not really be 'acceptably playable' on the PS4 should be at least 'acceptably playable' on the NG.
A game design that's certainly beyond the reach of the PS4 but is still easy for the PS5, should be within reach of the NG with compromises, relative to the PS5. The amount and types of compromises vary, but I think that the foundations of the game should be able to remain intact?
A game design that's pushing the PS5's CPU hard should not be expected to have a version on the NG without some fundamental design changes.
What would be better (as a hypothetical)?
12 GB of LPDDR5X
or
16 GB of LPDDR5
First, I will append this post here so I don't have to do much retyping.
But most pertinently, I'll quote this block:
So, this is my perspective: I think that your bandwidth needs scale with the amount of work you're trying to do.
Sometimes you see us referring to bandwidth:compute ratio, right? X amount of GB/s per Y amount of TFLOPS.
FLOPS are floating point operations per second. A measurement of computation/work; makes sense, right? But keep in mind that the calculation is basically X amount of hardware times Y clock frequency, ignoring minor details. The key is that there's an implicit assumption baked in: that all of the clock cycles are spent working. That no cycles are wasted away just sitting around twiddling your thumbs because you're waiting on data/instructions. So we know that we need adequate bandwidth.
But conceptually, there is an upper limit, right? You cannot go above working 100% of the time. Additional bandwidth past that point doesn't do anything.
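To make the 'hardware times clock frequency' calculation above concrete, here's a quick back-of-the-envelope sketch. Every number in it (ALU count, clock, bus width) is a made-up placeholder, not a spec for any real chip:

```python
# Rough sketch of the "hardware times clock frequency" idea.
# All numbers here are hypothetical placeholders, not leaks or specs.

shader_alus = 1536           # hypothetical shader ALU count (e.g. 12 SMs x 128)
flops_per_alu_per_clock = 2  # a fused multiply-add counts as 2 floating point ops
clock_hz = 1.0e9             # hypothetical 1.0 GHz GPU clock

tflops = shader_alus * flops_per_alu_per_clock * clock_hz / 1e12
print(f"Theoretical peak: {tflops:.2f} TFLOPS")  # ~3.07 TFLOPS

# The bandwidth:compute ratio people quote is just GB/s divided by TFLOPS.
bandwidth_gbs = 102.4  # hypothetical: LPDDR5 at 6400 MT/s on a 128-bit bus
print(f"Ratio: {bandwidth_gbs / tflops:.1f} GB/s per TFLOP")
```

Note that the TFLOPS figure assumes every cycle is doing useful work, which is exactly the assumption the bandwidth has to support.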
So, when I express, or smash 'Yeah!' for, posts that basically say 'more bandwidth plz!', I'm actually implying an expectation toward the upper end of capability for work. That is, I think that CPU & GPU grunt will be more on the upper end of our expectations, and thus I think that memory bandwidth needs will scale accordingly.
But that also means that you can absolutely get away with less bandwidth if your raw working/computing potential is lesser in the first place.
Now, in handheld mode, the RAM will be clocked down enough such that the distinction between 5 and 5X does not matter. So we're only going to care about it in relation to docked mode.
The higher you expect clocks to be while docked, the more weight you give to bandwidth in order to feed the GPU adequately. Eventually, at some point, you figure, "Ok, based on the competitive landscape, 12 GB is good enough, and my GPU seems to be able to run hard enough that I could see some gains from more bandwidth."
Conversely, if you don't expect docked clocks to be all that high, you probably figure, "Well, the amount of bandwidth I have now with LPDDR5 is fine/more than fine for how hard my GPU is going. Additional bandwidth might help some more, but diminishing returns are kicking in. Maybe I can do more things by instead spending on raising RAM quantity to 16 GB."
Now I'm gonna go off track a bit by going into rambling mode/spitballing my interpretations of a couple of things.
My interpretations are not necessarily the correct perspective, but hopefully they're interesting at least. If it gets the gears in some readers' minds turning to produce interesting "I don't quite agree; here's my take" responses, the discussion overall gets more fruitful... right?
First one's quick. To reiterate, memory bandwidth is functionally in service to your CPU/GPU trying to work. In gaming terms, this is usually in relation to what you're seeing on the screen immediately. The frames per second. How pretty things are. The effects. Physics. AI behavior.
The other thing's the longer one: RAM quantity. It's a more recent thing I've been thinking about due to the recent discussion.
RAM serves to hold data/instructions for you to work with while skipping accessing storage, because storage is relatively horridly slow/unresponsive.
Ideally, you have enough RAM to hold everything your game requires, but that's only really doable for the smaller projects, right? So, assuming that storage must be accessed eventually, I will verbalize this lens:
Your time in a game can be divided into three categories.
1. It is acceptable/normal for 'loading time' (i.e. reading from storage/the physical game media) to occur here. This is the opportunity to replace a significant amount of the contents in RAM.
2. People are ambivalent/neutral/lacking strong feelings about 'loading time' here. This is an opportunity to replace a low to moderate amount of the contents in RAM.
3. It is absolutely not acceptable for 'loading time' to intrude on the playing experience. So during this time, there's probably no storage access going on. Or, maybe there's opportunities here and there to sneakily replace low amounts of RAM contents without noticeably impacting the player's experience.
For example, think of fighting games. When you are in between matches, there being loading time is normal enough (or rather, loading into a match is normal enough). When you are in between rounds, I... don't think there are really strong feelings either way. There's room for a small amount of downtime in between rounds. But when you are in the middle of a round, you absolutely don't want tangible 'loading time' going on.
Another example would be level/stage based games. It is acceptable to have loading time in between stages. It is potentially okay to have slight amounts of loading time in between sections or during room transitions within a stage (fitting within category 2). But when you're actively playing, you're squarely within category 3: you don't want your experience to be noticeably interrupted.
So, what could more RAM do for you?
Naturally, having more RAM allows for the stretches of time in between 'loading time' to have... more. Bigger stages and/or more content in the stages.
It could also potentially just cut down on some loading time overall. For example's sake, let's go with 10 GB usable versus 14 GB usable. Now let's say that a playthrough, or a subset of a playthrough, of a game would end up going through maybe 20 GB of data. If you have 10 GB usable, then over the course of that playthrough (or subset), an additional 10 GB needs to be swapped in eventually. If you have 14 GB usable, you'd only need an additional 6 GB to be swapped in.
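As a toy sketch of that arithmetic (the 20 GB working set and the usable-RAM figures are just the hypothetical numbers from this example, not real budgets):

```python
# How much data has to be swapped in from storage over a playthrough,
# given how much RAM a game can actually use. Numbers are hypothetical.

def extra_streaming_gb(total_working_set_gb: float, usable_ram_gb: float) -> float:
    """Data that must be swapped in from storage after the initial load."""
    return max(total_working_set_gb - usable_ram_gb, 0.0)

for usable in (10, 14):
    print(f"{usable} GB usable -> {extra_streaming_gb(20, usable):.0f} GB swapped in later")
# 10 GB usable -> 10 GB swapped in later
# 14 GB usable -> 6 GB swapped in later
```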
And I'm sure there are neat tricks that serve to cover for weakness in other areas. I think that @oldpuck has mentioned before that you could do things like pre-calculate some things, save the results, and just bring them out when you need them, which saves CPU cycles.
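For a minimal sketch of that 'pre-calculate, save, reuse' idea (the sine table here is just a stand-in for whatever expensive calculation a game might actually bake ahead of time):

```python
# Bake an expensive function into a table once, up front, then do cheap
# lookups at runtime instead of re-running the math every frame.

import math

TABLE_SIZE = 256

# Offline/load-time step: pay the CPU cost once.
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle_radians: float) -> float:
    """Runtime step: a table lookup instead of a full sin() evaluation."""
    index = int((angle_radians / (2 * math.pi)) * TABLE_SIZE) % TABLE_SIZE
    return sine_table[index]

print(fast_sin(math.pi / 2))  # ~1.0, at the cost of a little precision
```

Same principle applies to baked lighting, precomputed navigation data, and so on: trade some RAM and storage for CPU cycles you don't have to spend at runtime.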